Parse cloud job set() function weirdness - parse-platform

I'm trying to run this cloud job weekly on Parse, where I assign a rank to players based on their high scores. This piece of code mostly seems to work, except it only sets ranks 1 through 9; any rank with more than one digit never gets set!
The job returns a success after setting ranks 1-9.
Parse.Cloud.job("TestJob", function(request, status)
{
Parse.Cloud.useMasterKey();
var rank = 0;
var usersQuery = new Parse.Query("ECJUser").descending("HighScore");
usersQuery.find(function(results){
for(var i=0;i<results.length;++i)
{
rank += 1;
console.log("Setting "+results[i].get('Name')+" rank to "+rank);
results[i].save({"Rank": rank});
}
}).then(function(){
status.success("Weekly Ranks Assigned");
}, function(error){
status.error("Uh oh. Weekly ranking failed");
})
})
In the console log, it clearly says "Setting playerName rank to 11", but it doesn't actually set anything in the Parse table; the value is just undefined (or whatever it was previously).
Does the code look right? Is it something JavaScript-related that I'm missing?
Updated based on answers:
Apparently I'm not waiting for the saves to complete, but I'm not sure how to write the promise-handling code. Here's what I have:
var usersQuery = new Parse.Query("ECJUser").descending("HighScore");
usersQuery.find().then(function(results) {
    var promises = [];
    for (var i = 0; i < results.length; i++) {
        promises.push(results[i].save({"Rank": rank}));
    }
    return promises;
})
What do I do with the list of promises? Where do I wait for them to complete?

Your code does not wait for the saves to complete, so it's going to have unpredictable results. It also isn't going to run through all users, just the first 'page' returned by the query.
So instead of using find you should consider using each. You also need to consider whether the job will have time to process all users; it may need to run multiple times.
For the saves, add each promise that is returned to an array, then wait for all of the promises to complete before calling status.success.
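Since the ranking depends on descending("HighScore") and each() disallows sort orders, here is a rough sketch of the promise-collection part using find() (legacy Parse.Promise API; a full solution would still need to paginate past the first page of results):
Parse.Cloud.job("TestJob", function(request, status) {
    Parse.Cloud.useMasterKey();
    var usersQuery = new Parse.Query("ECJUser").descending("HighScore");
    usersQuery.find().then(function(results) {
        var promises = [];
        for (var i = 0; i < results.length; i++) {
            // Queue every save and keep its promise
            promises.push(results[i].save({"Rank": i + 1}));
        }
        // Resolves only after every queued save has finished
        return Parse.Promise.when(promises);
    }).then(function() {
        status.success("Weekly Ranks Assigned");
    }, function(error) {
        status.error("Uh oh. Weekly ranking failed");
    });
});
Parse.Promise.when only resolves once every promise in the array has settled, so status.success no longer fires before the Rank saves have actually reached the server.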

Related

How do Apollo's paginated "read" and "merge" functions work?

I was reading through the docs to learn pagination approaches for Apollo. This is the simple example where they explain the paginated read function:
https://www.apollographql.com/docs/react/pagination/core-api#paginated-read-functions
Here is the relevant code snippet:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          read(existing, { args: { offset, limit }}) {
            // A read function should always return undefined if existing is
            // undefined. Returning undefined signals that the field is
            // missing from the cache, which instructs Apollo Client to
            // fetch its value from your GraphQL server.
            return existing && existing.slice(offset, offset + limit);
          },
          // The keyArgs list and merge function are the same as above.
          keyArgs: [],
          merge(existing, incoming, { args: { offset = 0 }}) {
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});
I have one major question about this snippet, and about other snippets from the docs that share the same "flaw" in my eyes, but I feel like I'm missing some piece.
Suppose I run a first query with offset=0 and limit=10. The server will return 10 results based on this query and store them in the cache after they pass through the merge function.
Afterwards, I run the query with offset=5 and limit=10. Based on the approach described in the docs and the above code snippet, my understanding is that I will get only the items from 5 through 10 instead of items from 5 to 15, because Apollo will see that the existing variable is present in read (with existing holding the initial 10 items) and will slice the available 5 items for me.
My question is: what am I missing? How will Apollo know to fetch new data from the server? How will new data arrive in the cache after the initial query? Keep in mind keyArgs is set to [], so the results will always be merged into a single entry in the cache.
Apollo will not slice anything automatically. You have to define a merge function that keeps the data in the correct order in the cache. One approach is to keep an array with empty slots for data not yet fetched, and place incoming data at its respective index. For instance, if you fetch items 30-40 out of a total of 100, your array would have 30 empty slots, then your items, then 60 empty slots. If you subsequently fetch items 70-80, those are placed at their respective indexes, and so on.
Your read function is where the decision is made on whether a network request is necessary. If you find all the requested data in existing, you return it and no request to the server is made. If any items are missing, you need to return undefined, which triggers a network request; your merge function then runs once the data is fetched, and finally your read function runs again, only this time the data is in the cache and it can return it.
This approach is for the cache-first fetch policy, which is the default.
The logic for returning undefined from your read function is implemented by you. There is no Apollo magic under the hood.
If you use the cache-and-network policy, your read function doesn't need to return undefined when data is missing, since a request to the server is made regardless of what is in the cache.
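A minimal sketch of what that could look like (illustrative, building on the docs snippet above; the hole-checking loop is just one way to detect missing items in a sparse array):
import { InMemoryCache } from '@apollo/client';

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: [],
          read(existing, { args: { offset = 0, limit = 10 } }) {
            if (!existing) return undefined;
            const page = existing.slice(offset, offset + limit);
            // merge() below leaves holes for unfetched indexes; if any
            // requested slot is missing, report a cache miss so Apollo
            // Client goes to the server.
            for (let i = 0; i < limit; ++i) {
              if (page[i] === undefined) return undefined;
            }
            return page;
          },
          merge(existing, incoming, { args: { offset = 0 } }) {
            // Place incoming items at their absolute positions,
            // leaving holes for anything not fetched yet
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});
Under cache-first, the offset=5/limit=10 query from the question would find holes at indexes 10-14, return undefined, trigger a network request, and read would succeed on the second pass once merge has filled those slots.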

Firestore transaction produces console error: FAILED_PRECONDITION: the stored version does not match the required base version

I have written a bit of code that allows a user to upvote / downvote recipes in a manner similar to Reddit.
Each individual vote is stored in a Firestore collection named votes, with a structure like this:
{username,recipeId,value} (where value is either -1 or 1)
The recipes are stored in the recipes collection, with a structure somewhat like this:
{title,username,ingredients,instructions,score}
Each time a user votes on a recipe, I need to record their vote in the votes collection, and update the score on the recipe. I want to do this as an atomic operation using a transaction, so there is no chance the two values can ever become out of sync.
Following is the code I have so far. I am using Angular 6; however, I couldn't find any TypeScript examples showing how to handle multiple get()s in a single transaction, so I ended up adapting some Promise-based JavaScript code that I found.
The code seems to work, but there is something happening that is concerning. When I click the upvote/downvote buttons in rapid succession, some console errors occasionally appear. These read POST https://firestore.googleapis.com/v1beta1/projects/myprojectname/databases/(default)/documents:commit 400 (). When I look at the actual response from the server, I see this:
{
  "error": {
    "code": 400,
    "message": "the stored version (1534122723779132) does not match the required base version (0)",
    "status": "FAILED_PRECONDITION"
  }
}
Note that the errors do not appear when I click the buttons slowly.
Should I worry about this error, or is it just a normal result of the transaction retrying? As noted in the Firestore documentation, a "function calling a transaction (transaction function) might run more than once if a concurrent edit affects a document that the transaction reads."
Note that I have tried wrapping try/catch blocks around every single operation below, and there are no errors thrown. I removed them before posting for the sake of making the code easier to follow.
Very interested in hearing any suggestions for improving my code, regardless of whether they're related to the HTTP 400 error.
async vote(username, recipeId, direction) {
  let value;
  if (direction == 'up') {
    value = 1;
  }
  if (direction == 'down') {
    value = -1;
  }
  // assemble vote object to be recorded in votes collection
  const voteObj: Vote = { username: username, recipeId: recipeId, value: value };
  // get references to both vote and recipe documents
  const voteDocRef = this.afs.doc(`votes/${username}_${recipeId}`).ref;
  const recipeDocRef = this.afs.doc('recipes/' + recipeId).ref;
  await this.afs.firestore.runTransaction(async t => {
    const voteDoc = await t.get(voteDocRef);
    const recipeDoc = await t.get(recipeDocRef);
    const currentRecipeScore = await recipeDoc.get('score');
    if (!voteDoc.exists) {
      // This is a new vote, so add it to the votes collection
      // and apply its value to the recipe's score
      t.set(voteDocRef, voteObj);
      t.update(recipeDocRef, { score: (currentRecipeScore + value) });
    } else {
      const voteData = voteDoc.data();
      if (voteData.value == value) {
        // existing vote is the same as the button that was pressed, so delete
        // the vote document and revert the vote from the recipe's score
        t.delete(voteDocRef);
        t.update(recipeDocRef, { score: (currentRecipeScore - value) });
      } else {
        // existing vote is the opposite of the one pressed, so update the
        // vote doc, then apply it to the recipe's score by doubling it.
        // For example, if the current score is 1 and the user reverses their
        // +1 vote by pressing -1, we apply -2 so the score will become -1.
        t.set(voteDocRef, voteObj);
        t.update(recipeDocRef, { score: (currentRecipeScore + (value * 2)) });
      }
    }
    return Promise.resolve(true);
  });
}
According to Firebase developer Nicolas Garnier, "What you are experiencing here is how Transactions work in Firestore: one of the transactions failed to write because the data has changed in the mean time, in this case Firestore re-runs the transaction again, until it succeeds. In the case of multiple Reviews being written at the same time some of them might need to be ran again after the first transaction because the data has changed. This is expected behavior and these errors should be taken more as warnings."
In other words, this is a normal result of the transaction retrying.
I used RxJS throttleTime to prevent the user from flooding the Firestore server with transactions by clicking the upvote/downvote buttons in rapid succession, and that greatly reduced the occurrences of this 400 error. In my app, there's no legitimate reason someone would need to click upvote/downvote dozens of times per second. It's not a video game.
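A minimal sketch of that throttling (illustrative; voteClicks and recipeService are stand-in names, not the app's actual code):
import { Subject } from 'rxjs';
import { throttleTime } from 'rxjs/operators';

// Button handlers push onto this stream instead of calling vote()
// directly; throttleTime(1000) lets the first click through, then
// ignores further clicks for one second.
const voteClicks = new Subject();

voteClicks
  .pipe(throttleTime(1000))
  .subscribe(click => {
    // recipeService.vote is a stand-in for the vote() method above
    recipeService.vote(click.username, click.recipeId, click.direction);
  });

// e.g. from the upvote button's click handler:
voteClicks.next({ username: 'alice', recipeId: 'abc123', direction: 'up' });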

Parse.User query not working in Cloud Code

I am working on a project using Parse where I need some information calculated for each user and updated when they update their account. I created a Cloud Code trigger that does what I need whenever a user account is updated, and that is working well. However, I have about two thousand existing accounts that I need to update as well. After hours of trying to get a Cloud Job to work, I decided to simplify it. I wrote the following job to simply count the user accounts. To reiterate: I'm not actually trying to count the users (there are much more efficient ways to do that); I am trying to verify that I can query and loop over the existing user accounts. (The useMasterKey option is in there because I will need it later.)
Parse.Cloud.job("getUserStatistics", function(request, status) {
// Set up to modify user data
Parse.Cloud.useMasterKey();
// Query for all users
var query = new Parse.Query(Parse.User);
var counter = 0;
query.each(function(user) {
counter = counter+1;
}).then(function() {
// Set the job's success status
status.success("Counted all User Accounts.");
}, function(error) {
// Set the job's error status
status.error("Failed to Count User Accounts.");
});
console.log('Found '+counter+' users.');
});
When I run the code, I get:
I2015-07-09T17:29:10.880Z]Found 0 users.
I2015-07-09T17:29:12.863Z]v99: Ran job getUserStatistics with:
Input: "{}"
Result: Counted all User Accounts.
Even more baffling to me, if I add:
query.limit(10);
...the query itself actually fails! (I would expect it to count 10 users.)
That said, if there is a simpler way to trigger an update on all the users in a Parse application, I'd love to hear it!
The reference actually says that:
The query may not have any sort order, and may not use limit or skip.
https://parse.com/docs/js/api/symbols/Parse.Query.html#each
So forget about query.limit(10); that's not relevant here.
Anyway, going by their example for a background job, it seems you might have forgotten to put a return in your each callback. Also, you called console.log('Found '+counter+' users.'); outside of your asynchronous task, which explains why you get 0 results. Maybe try:
query.each(function(user) {
    counter = counter + 1;
    // you'll want to save your changes for each user,
    // therefore, you will need this
    return user.save();
}).then(function() {
    // Set the job's success status
    status.success("Counted all User Accounts.");
    // console.log inside the asynchronous scope
    console.log('Found ' + counter + ' users.');
}, function(error) {
    // Set the job's error status
    status.error("Failed to Count User Accounts.");
});
You can check again Parse's example of writing this cloud job.
https://parse.com/docs/js/guide#cloud-code-advanced-writing-a-background-job

How much data can be saved in one Parse.Object.saveAll request? And how many requests are used for one Parse.Object.saveAll?

Recently, I have been running some tests on parse.com, and I am now facing a problem using Parse.Object.saveAll in a background job.
The parse.com documentation says a background job can run for 15 minutes. I set up a background job that pours data into the database using the following code:
Parse.Cloud.job("createData", function(request, status) {
var Dummy = Parse.Object.extend("dummy");
var batchSaveArr = [];
for(var i = 0 ; i < 50000 ; i ++){
var obj = new Dummy();
// genMessage() is a function to generate a random string with 5 characters long
obj.set("message", genMessage());
obj.set("numValue",Math.floor(Math.random() * 1000));
batchSaveArr.push(obj);
}
Parse.Object.saveAll(batchSaveArr, {
success: function(list){
status.success("success");
},
error: function(error){
status.error(error.message);
}
});
});
Although the job pours data into the database, its main purpose is to test the Parse.Object.saveAll function. When I run this job, the error "This application has exceeded its request limit." appears in the log. However, the analytics page shows a request count of at most 1. This job was the only thing I ran on Parse, and no other requests were made while the background job was running.
It seems there is some problem with Parse.Object.saveAll, or maybe I have some misunderstanding of this function.
Is anyone else facing the same problem?
How much data can be saved in one Parse.Object.saveAll request?
How many requests are used for one Parse.Object.saveAll call?
I asked the question on Facebook and the reply was quite disappointing.
Please follow the link:
https://developers.facebook.com/bugs/439821726158766/
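For what it's worth, the JS SDK turns a single saveAll call into a series of batched REST requests under the hood, so saving 50,000 objects in one call can trip the per-second request limit even though it looks like one request from the job's point of view. A sketch of one workaround, saving in smaller sequential chunks with the legacy Parse.Promise API (the chunk size of 500 is an assumption to tune):
// Save objects in sequential chunks so the job does not fire one huge
// burst of batched requests at once
var CHUNK_SIZE = 500;

function saveAllInChunks(objects) {
    var promise = Parse.Promise.as();
    for (var i = 0; i < objects.length; i += CHUNK_SIZE) {
        (function(chunk) {
            // Chain each chunk so it starts only after the previous
            // chunk has been saved
            promise = promise.then(function() {
                return Parse.Object.saveAll(chunk);
            });
        })(objects.slice(i, i + CHUNK_SIZE));
    }
    return promise;
}

// In the job body:
saveAllInChunks(batchSaveArr).then(function() {
    status.success("success");
}, function(error) {
    status.error(error.message);
});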

Model records ordering in Spine.js

As far as I can see in the Spine.js sources, the Model.each() function returns the Model's records in the order of their IDs. This is completely unreliable in scenarios where ordering is important: a long list of people, etc.
Can you suggest a way to keep the original record ordering (the same order in which the records arrived via refresh() or similar functions)?
P.S.
Things are even worse because, by default, Spine.js internally uses newly generated GUIDs as IDs, so the record order is completely random, which is unacceptable.
EDIT:
It seems that a recent commit, https://github.com/maccman/spine/commit/116b722dd8ea9912b9906db6b70da7948c16948a, made this possible, but I have not tested it myself because I switched from Spine to Knockout.
I bumped into the same problem learning Spine.js. I'm using pure JS, so I had been neglecting the contacts example (http://spinejs.com/docs/example_contacts), which helped out on this one. As a matter of fact, you can't really keep the ordering from the server this way, but you can do your own ordering in JavaScript.
Notice that I'm using the Element Pattern here (http://spinejs.com/docs/controller_patterns).
First you define the function that will do the sorting inside the model:
/* Extending the Student Model */
Student.extend({
    nameSort: function(a, b) {
        if ((a.name || a.email) > (b.name || b.email))
            return 1;
        else
            return -1;
    }
});
Then, in the students controller, you add the elements using that sort:
/* Controller that manages the students */
var Students = Spine.Controller.sub({
    /* code omitted for simplicity */
    addOne: function(student) {
        var item = new StudentItem({item: student});
        this.append(item.render());
    },
    addAll: function() {
        var sortedByName = Student.all().sort(Student.nameSort);
        var _self = this;
        $.each(sortedByName, function() { _self.addOne(this); });
    },
});
And that's it.
