GOAL: 1) Enable users to play my game regardless of poor connectivity, and 2) have ~reliable user state stored on Parse for customer support and stats.
My Approach: I am using local client storage as the master (so that net connectivity is not required), and I am using Parse as a secondary synced storage so that I can address customer issues and maintain stats.
The game has a sequence of fixed levels.
User state for each level is stored locally in UserDefaults as the source of truth, or SOT (yes, a bit ugly).
I use Parse as a secondary replicated store for customer support issues and stats.
Thus, I also store the level state on Parse (i.e. PFObject="UserLevel": userId, level #, status, high score, #wins, #losses, etc.).
There should be only one (or zero) PFObject per level per user.
==> The problem: when network connectivity is poor, I often end up creating multiple PFObjects for the same level.
e.g. A typical 5 minute game session:
User unlocks a level:
==> Create PFObject "UserLevel": uid=currentUser, level=#, status=UNLOCKED, wins=0, etc.
User plays this level and loses:
==> Query (async) Parse for PFObject (match userId & level #):
If one matching object found ==> ++losses... (saveEventually)
If >1 matching objects found ==> object[0]:++losses... (saveEventually) (ignore all but one!)
Else (no matching objects found) ==> Create new PFObject. (saveEventually)
User plays again and wins:
==> Query (async) Parse for PFObject (match userId & level #)
If one matching object found: status=COMPLETED, ++wins... (saveEventually)
If >1 matching objects found, status=COMPLETED, ++wins... (saveEventually) (ignore all but one!)
Else (no matching objects found): create the PFObject. (saveEventually)
... etc...
As you can guess, if the network connectivity is slow, and the various queries and saveEventually's have not completed yet, this can easily result in duplicate PFObjects for the same level.
My gut tells me not to create the PFObject during an update if it does not exist. Instead, let the Parse state be sloppy/behind, and clean it up during app start (i.e. a clean sync).
I'm assuming this is a very common design pattern and that I'm missing some basic CS fundamentals. (I'm a newb to backend coding.)
The simple solution, since there is a fixed number of levels, is to pre-create each "UserLevel" object and save them all up front; then everywhere else you only ever do updates instead of create-or-update.
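To make that concrete, here is a rough sketch of the pattern over the Parse REST API in Python (the server URL, keys, field names, and the 50-level count are placeholders, not values from the question; on iOS the same idea applies with PFObject/PFQuery plus saveEventually): create every UserLevel exactly once, and make every later write a pure update keyed by (userId, level).

import json
import requests

PARSE_URL = "https://YOUR-PARSE-SERVER/parse"        # placeholder
HEADERS = {
    "X-Parse-Application-Id": "YOUR_APP_ID",         # placeholder
    "X-Parse-REST-API-Key": "YOUR_REST_KEY",         # placeholder
    "Content-Type": "application/json",
}
LEVEL_COUNT = 50                                     # assumed fixed number of levels

def bootstrap_levels(user_id):
    # One-time creation, e.g. on first launch: exactly one UserLevel row per level.
    for level in range(1, LEVEL_COUNT + 1):
        body = {"userId": user_id, "level": level, "status": "LOCKED",
                "wins": 0, "losses": 0, "highScore": 0}
        requests.post(PARSE_URL + "/classes/UserLevel",
                      headers=HEADERS, data=json.dumps(body))

def record_loss(user_id, level):
    # Later writes never create; they look up the single row and increment it atomically.
    where = json.dumps({"userId": user_id, "level": level})
    r = requests.get(PARSE_URL + "/classes/UserLevel",
                     headers=HEADERS, params={"where": where, "limit": 1})
    results = r.json().get("results", [])
    if not results:
        return  # leave the gap for a clean sync at app start instead of creating here
    object_id = results[0]["objectId"]
    requests.put(PARSE_URL + "/classes/UserLevel/" + object_id, headers=HEADERS,
                 data=json.dumps({"losses": {"__op": "Increment", "amount": 1}}))

Because the rows already exist, slow queries and retried saves can no longer race each other into creating duplicates; the worst case is an update that arrives late.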
Related
I have a question regarding pattern matching on Redis keys. Currently, I am storing a set of subscriptions, where each key is a composite of different event attributes. For example, if a subscription comes in as
S1 - {event:created, userId:1234, stateId:xyz}
It's stored in the cache for matching as (attributes are sorted before creating the key)
event:created#stateId:xyz#userId:1234 = {S1}
Now there can be other subscriptions for this exact combination. But if an event comes in with any of the three attributes, it has to be matched against all keys in which those attributes appear as a substring. Example
event:created#stateId:xyz#userId:1234 = {S1,S2,S3}
event:started#stateId:xyz#userId:1234 = {S4,S5,S6}
event:created#stateId:abc#userId:1234 = {S7,S8,S9}
The following is the resulting attribute-to-subscription matching chart.
event:created -> S1,S2,S3,S7,S8,S9
event:started -> S4,S5,S6
stateId:xyz -> S1,S2,S3,S4,S5,S6
userId:1234 -> S1,S2,S3,S4,S5,S6,S7,S8,S9
stateId:abc and userId:1234 -> S1,S2,S3,S4,S5,S6,S7,S8,S9
I tried using a SCAN on Redis with a pattern match, but it takes a long time as my cache can have a lot of entries, and SCAN takes O(N) time.
Any idea how I can do this efficiently? Maybe by using a secondary structure in Redis like a Tree or something? Or any other Redis data structure I should look at?
Thanks
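In case it is useful: a common alternative to SCAN here is a secondary index, i.e. one Redis Set per attribute value, populated when the subscription is stored. A lookup then becomes a single SMEMBERS/SUNION over a few small sets instead of a scan over the whole keyspace. A minimal sketch with redis-py; the idx: key prefix is just an assumed naming convention:

import redis

r = redis.Redis()

def index_subscription(sub_id, attrs):
    # attrs e.g. {"event": "created", "stateId": "xyz", "userId": "1234"}
    for field, value in attrs.items():
        r.sadd(f"idx:{field}:{value}", sub_id)

def match(attrs):
    # Union of the per-attribute sets = every subscription that contains ANY of the
    # given attributes, which reproduces the chart above without scanning keys.
    keys = [f"idx:{field}:{value}" for field, value in attrs.items()]
    return r.sunion(keys)

index_subscription("S1", {"event": "created", "stateId": "xyz", "userId": "1234"})
print(match({"event": "created"}))                   # -> {b'S1', ...}
print(match({"stateId": "abc", "userId": "1234"}))   # -> union of both sets

If you instead need subscriptions matching all of the given attributes, SINTER over the same sets does that; either way the composite keys themselves no longer have to be scanned.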
BlockExplorer - https://explorer.mainnet.near.org/blocks/2RPJGA17MQ9GAtwSVuVbasuosgkWqDgXHKWLuX4VyYv4
I am able to query starting from block 9820221.
Can anyone help me understand why this is the case, and whether there are other explorers where I can query the blockDetails?
mainnet started from block height 9820210 (see mainnet genesis config), so there are no blocks before that one. The first 3 blocks are missing due to validators being offline or something like that, so the first produced block is 9820214, and you can query it: https://explorer.mainnet.near.org/blocks/CFAAJTVsw5y4GmMKNmuTNybxFJtapKcrarsTh5TPUyQf
Blocks before 9820210 were produced by the mainnet that ran before July 22nd, 2020. For some reason NEAR needed to restart the network from genesis, so we dumped the state as of block 9820210 and called it a new genesis, and that was the start. You have no access to the history before that moment; you can only inspect the state as of genesis, where certain accounts exist with certain balances, contract code, and state.
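If it helps, the same block details are also available from the public JSON-RPC endpoint, so you are not limited to the explorer UI. A small illustration in Python, using the archival endpoint since regular RPC nodes garbage-collect old blocks (9820214 being the first produced block mentioned above):

import requests

RPC = "https://archival-rpc.mainnet.near.org"

def get_block(height):
    payload = {"jsonrpc": "2.0", "id": "dontcare",
               "method": "block", "params": {"block_id": height}}
    return requests.post(RPC, json=payload).json()

print(get_block(9820214)["result"]["header"]["height"])   # first produced block: succeeds
print(get_block(9820209).get("error"))                     # below the genesis height: the node returns an error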
I have a scheduled script execution that needs to persist a value between runs. It is updated with each run. Using gs.setProperty seemed like the natural place until I came across this:
Care should be taken when setting system properties (sys_properties)
using this method as it causes a system-wide cache flush. Each flush
can cause system degradation while the caches rebuild. If a value must
be updated often, it should not be stored as a system property. In
general, you should only place values in the sys_properties table that
do not frequently change.
Creating a separate table to store a single scalar value seems like overkill. Is there a better place to store it?
You could set a preference if you need it in the instance. Another place could be the events table: log the event with the data in parm1 or parm2, and on the next run query the most recent event.
I'd avoid creating a table, as that has cost implications for some clients, and I agree with the concerns about sys_properties.
var encrypter = new GlideEncrypter();
var encrypted = encrypter.encrypt('Super Secret Phrase');
gs.info('encrypted: ' + encrypted);
var decrypted = encrypter.decrypt(encrypted);
gs.info('decrypted: ' + decrypted);
/**
*** Script: encrypted: g/bXLJHa7xNRMKZEo5q/YtLMEdse36ED
*** Script: decrypted: Super Secret Phrase
*/
This way only administrators could really read this data. Also if I recall correctly, the sysevent table is cleared after 7 days. You could have the job remove the event as soon as it has it in memory.
I'm working on an IMAP client using Ruby and Rails. I can successfully import messages, mailboxes, and more... However, after the initial import, how can I detect any changes that have occurred since my last sync?
Currently I am storing the UIDs and UID validity values in the database, comparing them, and searching appropriately. This works, but it doesn't detect deleted messages or changes to message flags, etc.
Do I have to pull all messages every time to detect these changes? How do other IMAP clients do it so quickly (e.g. Apple Mail and Postbox)? My script already takes 10+ seconds per account, even with very few emails:
# select ourself as the current mailbox
@imap_connection.examine(self.location)

# grab all new messages and update them in the database
# if the UIDs are still valid, we will just fetch the newest UIDs
# otherwise, we need to search since we last synced, which is slower :(
if self.uid_validity.nil? || uid_validity == self.uid_validity
  # for some IMAP servers, a uid_fetch on an empty mailbox will fail, so rescue it
  begin
    messages = @imap_connection.uid_fetch(uid_range, ['UID', 'RFC822', 'FLAGS'])
  rescue
    # gmail cries if the folder is empty
    uids = @imap_connection.uid_search(['ALL'])
    messages = @imap_connection.uid_fetch(uids, ['UID', 'RFC822', 'FLAGS']) unless uids.empty?
  end
  messages.each do |imap_message|
    Message.create_from_imap!(imap_message, self.id)
  end unless messages.nil?
else
  query = self.last_synced.nil? ? ['ALL'] : ['SINCE', Net::IMAP.format_datetime(self.last_synced)]
  @imap_connection.search(query).each do |message_id|
    imap_message = @imap_connection.fetch(message_id, ['RFC822', 'FLAGS', 'UID'])[0]
    # don't mark the messages as read
    # @imap_connection.store(message_id, '-FLAGS', [:Seen])
    Message.create_from_imap!(imap_message, self.id)
  end
end

# now assume all UIDs are valid
self.uid_validity = uid_validity
# now remember that we just fetched all those messages
self.last_synced = Time.now
self.save!
There is an IMAP extension for Quick Flag Changes Resynchronization (RFC-4551). With this extension it is possible to search for all messages that have been changed since the last synchronization (based on a per-mailbox modification sequence, MODSEQ). However, as far as I know this extension is not widely supported.
There is an informational RFC that describes how IMAP clients should do synchronization (RFC-4549, section 4.3). The text recommends issuing the following two commands:
tag1 UID FETCH <lastseenuid+1>:* <descriptors>
tag2 UID FETCH 1:<lastseenuid> FLAGS
The first command is used to fetch the required information for all unknown mails (without knowing how many mails there are). The second command is used to synchronize the flags for the already seen mails.
AFAIK this method is widely used. Therefore, many IMAP servers contain optimizations in order to provide this information quickly. Typically, the network bandwidth is the limiting factor.
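For what it's worth, here is roughly what those two commands look like from code. This is sketched with Python's imaplib purely for illustration (host, credentials, and last_seen_uid are placeholders); in Ruby the same two calls map onto Net::IMAP#uid_fetch.

import imaplib

last_seen_uid = 4711                      # highest UID stored from the previous sync
imap = imaplib.IMAP4_SSL("imap.example.com")
imap.login("user", "password")
imap.select("INBOX", readonly=True)

# If UIDVALIDITY differs from the stored value, every cached UID is void and a full
# re-download is required.
uidvalidity = imap.response("UIDVALIDITY")[1][0]

# tag1: everything never seen before, fetched with full content and flags
typ, new_messages = imap.uid("FETCH", f"{last_seen_uid + 1}:*", "(UID FLAGS RFC822)")

# tag2: flags only for messages we already have; UIDs missing from this response
# were deleted (expunged) on the server since the last sync
typ, flag_updates = imap.uid("FETCH", f"1:{last_seen_uid}", "(UID FLAGS)")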
The IMAP protocol is brain dead this way, unfortunately. IDLE really should be able to return this kind of stuff while connected, for example. The FETCH FLAGS suggestion above is the only way to do it.
One thing to be careful of, however, is that stored UIDs are only guaranteed to stay valid while the mailbox's UIDVALIDITY value is unchanged; if the server reports a new UIDVALIDITY, you must discard the UIDs you have stored and resynchronize from scratch.
I have taken a database class this semester and we are studying about maintaining cache consistency between the RDBMS and a cache server such as memcached. The consistency issues arise when there are race conditions. For example:
Suppose I do a get(key) from the cache and there is a cache miss. Because I get a cache miss, I fetch the data from the database, and then do a put(key,value) into the cache.
But, a race condition might happen, where some other user might delete the data I fetched from the database. This delete might happen before I do a put into the cache.
Thus, ideally the put into the cache should not happen, since the data is no longer present in the database.
If the cache entry has a TTL, the entry in the cache might expire. But still, there is a window where the data in the cache is inconsistent with the database.
I have been searching for articles/research papers that speak about this kind of issue, but I could not find any useful resources.
This article gives you an interesting note on how Facebook (tries to) maintain cache consistency: http://www.25hoursaday.com/weblog/2008/08/21/HowFacebookKeepsMemcachedConsistentAcrossGeoDistributedDataCenters.aspx
Here's the gist of the article:
I update my first name from "Jason" to "Monkey"
We write "Monkey" in to the master database in California and delete my first name from memcache in California but not Virginia
Someone goes to my profile in Virginia
We find my first name in memcache and return "Jason"
Replication catches up and we update the slave database with my first name as "Monkey." We also delete my first name from Virginia memcache because that cache object showed up in the replication stream
Someone else goes to my profile in Virginia
We don't find my first name in memcache so we read from the slave and get "Monkey"
How about using a value saved in memcache as a lock signal? (A sketch follows after this list.)
every single memcache command is atomic
after you retrieved data from db, toggle lock on
after you put data to memcache, toggle lock off
before delete from db, check lock state
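For illustration, here is one way that lock-flag idea could look with pymemcache (the :lock key suffix, the 10-second lock TTL, and the fetch_from_db/delete_from_db helpers are assumptions, not a drop-in recipe). The point is that add() is atomic, so only one client can hold the lock at a time.

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def read_through(key, fetch_from_db):
    value = cache.get(key)
    if value is not None:
        return value
    if cache.add(key + ":lock", b"1", expire=10):   # lock on: we won the right to refill
        try:
            value = fetch_from_db(key)
            cache.set(key, value)
        finally:
            cache.delete(key + ":lock")             # lock off once the cache is filled
        return value
    return fetch_from_db(key)                       # someone else is refilling; bypass the cache

def delete(key, delete_from_db):
    # per the last point above: check the lock state before deleting from the db
    if not cache.add(key + ":lock", b"1", expire=10):
        return False                                # a refill is in flight; retry or back off
    try:
        delete_from_db(key)
        cache.delete(key)
    finally:
        cache.delete(key + ":lock")
    return True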
The code below gives some idea of how to use Memcached's operations add, gets and cas to implement optimistic locking to ensure consistency of cache with the database.
Disclaimer: I do not guarantee that it's perfectly correct and handles all race conditions. Also, consistency requirements may vary between applications.
def read(k):
    loop:
        cache_value = get(k)
        if cache_value == 'updating':
            handle_too_many_retries()
            sleep()
            continue
        if cache_value == None:
            add(k, 'updating')
            cache_value = gets(k)      # also remembers the CAS token for the cas() below
            db_value = get_from_db(k)
            if cache_value == 'updating':
                cas(k, 'value:' + version_index(db_value) + ':' + extract_value(db_value))
            return db_value
        return extract_value(cache_value)

def write(k, v):
    set_to_db(k, v)
    loop:
        cache_value = gets(k)          # also remembers the CAS token for the cas() below
        if cache_value != 'updating' and cache_value != None and version_index(cache_value) >= version_index(v):
            break
        if cas(k, v):
            break
        handle_too_many_retries()

# for deleting we can use some 'tombstone' as a cache value
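For reference, the gets/cas primitives in the pseudocode above map directly onto real client calls. A minimal sketch with pymemcache against a local memcached; the compute_new_value callback is hypothetical:

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def cas_update(key, compute_new_value, max_retries=10):
    for _ in range(max_retries):
        value, cas_token = cache.gets(key)   # read the value together with its CAS token
        if value is None:
            return False                     # nothing cached, so nothing to update
        if cache.cas(key, compute_new_value(value), cas_token):
            return True                      # stored: nobody wrote in between
        # cas() returned False: another writer got there first, so re-read and retry
    raise RuntimeError("too many CAS retries for " + repr(key))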
When you read, the following happens:
if (key is not in cache) {
    value = fetch data from db
    put(key, value);
    return value
} else {
    return get(key)
}
When you write, the following happens:
1. delete/update the data in the db
2. clear the cache
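Expressed as code, this is the classic cache-aside pattern. A tiny Python sketch (the db_fetch/db_update helpers and the local memcached client are placeholders), with the caveat from the question that the read path can still race with a concurrent delete during the small window between the db read and the cache put:

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def read(key, db_fetch):
    value = cache.get(key)
    if value is None:                  # cache miss
        value = db_fetch(key)          # 1. fetch the data from the db
        if value is not None:
            cache.set(key, value)      # 2. put(key, value) into the cache
    return value

def write(key, value, db_update):
    db_update(key, value)              # 1. update/delete the data in the db
    cache.delete(key)                  # 2. clear the cache entry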