re-order items with meteor 0.8+ - sorting

I want to do a todo list with drag'n'drop like https://github.com/meteor/meteor/tree/master/examples/unfinished/reorderable-list.
The problem is that I don't know how to handle the rank properly. I tried the example above; it works fine until the computed rank stops changing.
So I thought that it would be better to reorder my todo list each time I insert a new task or if I change the rank of one task.
First try on client:
var dropRank = 1;
Tasks.find({rank: {$gt: dropRank - 1}}, {fields: {_id: 1}}).forEach(function (task) {
  Tasks.update(task._id, {$inc: {rank: 1}});
});
Tasks.insert({rank: dropRank});
After ~150 tasks, it becomes slow to insert a new task at rank 1 and to reorder the ranks.
2nd try on server (with a Meteor.method or with collection.hook):
Tasks.update({rank: {$gt: dropRank - 1}}, {$inc: {rank: 1}}, {multi: true});
After ~150 tasks, I can see the ranks updating slowly on the client.
If I try it with a local collection, it slows down after 400 tasks.
So the question is: is there a proper way to build a rank so that I can insert a task and display it without updating the other ranks?
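For what it's worth, the linked reorderable-list example avoids renumbering by giving a dropped task a rank between its two neighbours; here is a minimal sketch of that idea (my own code, not the example's exact implementation):

```javascript
// Compute a rank for a task dropped between two neighbours so that no
// other task's rank has to change. `before` and `after` are the ranks of
// the neighbouring tasks (null at either end of the list).
function rankBetween(before, after) {
  if (before == null && after == null) return 1;  // empty list
  if (before == null) return after - 1;           // dropped at the top
  if (after == null) return before + 1;           // dropped at the bottom
  return (before + after) / 2;                    // dropped in between
}

// Inserting at rank 1 then no longer touches the other ~150 tasks:
// Tasks.insert({rank: rankBetween(null, firstTask.rank)});

console.log(rankBetween(1, 2));    // 1.5
console.log(rankBetween(null, 1)); // 0
```

The catch, as the question already hints, is floating-point precision: after enough drops into the same gap the midpoints stop changing, so an occasional renumbering pass is still needed.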

Have you tested what's slowing you down: the update of the database or the rewriting of the page? I did a simple replication of your application and found that the update did take some time when there were 400 divs being written to the browser page, but when I limited the output of the data context to 50 rows, it felt really snappy.
For another project I'm working on, I found that I had to be pretty careful about how much I asked of the browser when updating the database. It took some testing, and for that project I found that 30 divs was about all I wanted to update at a time.
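Limiting the data context as described could look like this in Meteor (the helper name and page size are my assumptions, not from the answer):

```javascript
// Cursor options for rendering only one "page" of tasks, keeping the DOM small.
function pageOptions(page, pageSize) {
  return {sort: {rank: 1}, skip: page * pageSize, limit: pageSize};
}

// In a Blaze helper (assumed wiring):
// Template.taskList.helpers({
//   tasks: function () { return Tasks.find({}, pageOptions(0, 50)); }
// });

console.log(pageOptions(2, 50)); // { sort: { rank: 1 }, skip: 100, limit: 50 }
```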

I gave up looking for another way of updating the rank and rendering everything.
I split the data into 2 parts:
the static part: builds the first view with #each and reactive: false on the cursor
the reactive part: a cursor observer that places new tasks and deletes or moves tasks in the DOM when the change didn't come from the user himself.
I could easily insert new tasks in front of 500-700 other tasks, so I'm satisfied. I tried with 1000 tasks, but that was too much.
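The cursor-observer part of this approach can be sketched as follows; the actual DOM manipulation is elided, and the helper that finds the insertion point is shown as plain code:

```javascript
// Given the ranks currently rendered (sorted ascending) and a new task's
// rank, find the index where its DOM row should be inserted.
function insertIndex(ranks, newRank) {
  var i = 0;
  while (i < ranks.length && ranks[i] <= newRank) i++;
  return i;
}

// Wiring with Meteor's observe API (sketch; the callbacks would patch the DOM):
// Tasks.find({}, {sort: {rank: 1}}).observe({
//   added:   function (task)       { /* insert a row at insertIndex(...) */ },
//   changed: function (next, prev) { /* move/update the row for next._id */ },
//   removed: function (task)       { /* remove the row for task._id */ }
// });

console.log(insertIndex([1, 2, 5], 3)); // 2
```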


Flow Triggering Itself (Possibly), Each Run Hits Past IDs That Were Edited

I am pretty new to Power Automate. I created a flow that triggers when an item is created or modified. It initializes some variables, then runs some switch cases to assign values to each of them. The variables go into an array, and another variable is incremented to get the total of the array. I then have a conditional that assigns a value to a column in the list. I tested the flow by going into the modern view of the list and clicking the save button. This worked a bunch of times, so I sent it for user testing. One of the users edited multiple items by double-clicking into an item, which saves after each column change (and which I assume triggers a run of the flow).
The flow seemingly works, but based on the run history it seemed to get bogged down at some point. I let it sit overnight and then tested again, and now it shows runs from multiple IDs at a time even though I only edited one specific item.
I had another developer take a look at my flow and he could not spot anything wrong with it. It never had a hard error in testing, only warnings about conditionals causing a loop, but all my conditionals rectify. Pictures included. I am just not sure what caveats I might be missing.
I am currently letting the flow sit to see if it finishes catching up. I read about the concurrent run option as well as conditions on the trigger itself. I am curious why it seems to run on two (or more) records all at once without me or anyone editing each one.
You might be able to ignore the updates from the service account (the account used in the connection of the actions) by using the following trigger condition expression:
@not(equals(triggerOutputs()?['body/Editor/Claims'], 'i:0#.f|membership|johndoe@contoso.onmicrosoft.com'))

In Firebase Database, how to read a large list then get child updates, efficiently

I want to load a large list (say, 5000 items) from a Firebase Database as fast as possible, and then get updates whenever a child is added or changed.
My approach so far is to use ObserveSingleEvent to load the entire list at startup:
pathRef.ObserveSingleEvent(DataEventType.Value, (snapshot) =>
and when that has finished, use ObserveEvent for child changes and additions:
nuint observerHandle1 = pathRef.ObserveEvent(DataEventType.ChildChanged, (snapshot) =>
nuint observerHandle2 = pathRef.ObserveEvent(DataEventType.ChildAdded, (snapshot) =>
The problem is that the ChildAdded event is triggered 5000 times, all of which are usually unnecessary, and it takes a long time to complete.
(From the docs: "child_added is triggered once for each existing child and then again every time a new child is added to the specified path.")
And I can't just ignore the first update for each child, because it might be a genuine change (e.g. the database was updated just after ObserveSingleEvent completed).
So I have to check each item on the client and see if it has changed. This is all very slow and inefficient.
I've seen posts suggesting to use a query with LimitToLast etc, but this is not reliable as child changes may be missed.
My plan B is to skip the ObserveSingleEvent load and just rely on the 5000 child-added calls, but that seems crazy, is slower to get the list initially, and probably uses more of the user's data allowance.
I would think this is a common scenario, so what is the best solution?
Also, I have persistence enabled, so a couple of related questions:
1) If the entire list has been loaded before, does ObserveSingleEvent with Value just load from disk with no network use when online (assuming no changes since the last download), or does it download the whole list again?
2) Similarly with the 5000 child-added calls - if it's all on disk, is there any network use when online, if nothing has changed?
(I'm using C# for iOS, but I'm sure it's the same issue on other platforms.)
The wire traffic for a value listener and for a child_* listener is exactly the same. The events are purely a client-side interpretation of the underlying data.
If you need all children, consider using only the child_* listeners and dropping the pathRef.ObserveSingleEvent(DataEventType.Value, ...) call. It'll lead to the simplest code.
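A sketch of that suggestion using only child_* listeners; the list-maintenance logic is plain JavaScript, and the SDK wiring is shown as comments (using the web SDK's on() calls, which the C# ObserveEvent calls map onto directly):

```javascript
// Local mirror of the list, keyed by child id. With only child_* listeners,
// the initial load arrives as one child_added per existing child, and live
// updates arrive through the very same handlers afterwards - no special
// "is this the first load?" check is needed.
var mirror = new Map();

function childAdded(key, value)   { mirror.set(key, value); }
function childChanged(key, value) { mirror.set(key, value); }
function childRemoved(key)        { mirror.delete(key); }

// Assumed wiring (Firebase web SDK):
// pathRef.on('child_added',   function (s) { childAdded(s.key, s.val()); });
// pathRef.on('child_changed', function (s) { childChanged(s.key, s.val()); });
// pathRef.on('child_removed', function (s) { childRemoved(s.key); });

childAdded('a', 1);
childAdded('b', 2);
childChanged('a', 10);
console.log(mirror.get('a'), mirror.size); // 10 2
```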

Immediately Display New Metrics

I am using Graphite and Coda Hale Metrics to try to track the number of times particular APIs are called, and also the top 10 callers. I have assigned a metric to each user who calls the API and use Graphite to bring back the top 10.
The problem is that if it is a new user - i.e. a new metric - it will only be displayed in Graphite once the tool is refreshed. Has anyone come across a workaround for this? Is there some way Graphite can automatically detect new meters?
Just to be clear: I can see the top ten API callers for the last 30 minutes... unless one of them is a brand-new user who has never logged in before.
It seems that graphite-web uses an on-disk index generated by a glorified find command. Another script is available that you can run from cron to update the metric index file.
Whenever you update the index file, the graphite-web process will detect it and reload it.
Since reloading the index might be heavy for a large number of metrics (1M+), I would advise modifying the update script a bit to conditionally update the file (only if it differs, for instance).
EDIT: after testing, graphite does not seem to call the reloading code.

SubSonic AddMany() vs foreach loop Add()

I'm trying to figure out whether or not SubSonic's AddMany() method is faster than a simple foreach loop. I poked around a bit on the SubSonic site but didn't see much in the way of performance stats.
What I currently have (.ForEach() just has some validation in it; other than that it works just like foreach (...) { do stuff }):
records.ForEach(record =>
{
    newRepository.Add(record);
    recordsProcessed++;
    if (cleanUp) oldRepository.Delete<T>(record);
});
Which would change to:
newRepository.AddMany(records);
if (cleanUp) oldRepository.DeleteMany<T>(records);
If you notice, with this method I lose the count of how many records I've processed, which isn't critical... but it would be nice to be able to show the user how many records were moved with this tool.
So my questions boil down to: Would AddMany() be noticeably faster? And is there any way to get a count of the number of records actually copied over? If it succeeds, can I assume all the records were processed? If one record fails, does the whole process fail?
Thanks in advance.
Just to clarify, AddMany() generates individual queries per row and submits them via a batch; DeleteMany() generates a single query. Please consult the source code and the generated SQL when you want to know what happens to your queries.
Your first approach is slow: 2*N queries. However, if you submit the queries in a batch, it will be faster.
Your second approach is faster: N+1 queries. You can find how many will be added simply by enumerating 'records'.
If there is a risk of exceeding capacity limits on the size of a batch, then submit 50 or 100 at a time with little penalty.
Your final question depends on transactions. If the whole operation is one transaction, it will commit or abort as one. Otherwise, each query stands alone. Your choice.
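The 50-or-100-at-a-time suggestion is just array chunking; a language-neutral sketch (JavaScript here, batch size arbitrary):

```javascript
// Split records into fixed-size batches so each AddMany-style call stays
// under the batch capacity limit.
function chunk(records, size) {
  var batches = [];
  for (var i = 0; i < records.length; i += size) {
    batches.push(records.slice(i, i + size));
  }
  return batches;
}

var batches = chunk(Array.from({length: 230}, function (_, i) { return i; }), 100);
console.log(batches.length, batches[batches.length - 1].length); // 3 30
// for each batch: newRepository.AddMany(batch); (per the question's API)
```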

Automatically rebuild cache

I run a Symfony 1.4 project with a very large amount of data. The main page and category pages use pagers, which need to know how many rows are available. I'm passing a query containing joins to the pager, which leads to a loading time of 1 minute on these pages.
I configured cache.yml for the respective actions. But I think the workaround is insufficient and here are my assumptions:
Symfony rebuilds the cache within a single request which is made by a user. Let's call this user "cache-victim" to simplify things.
In our case, the data needs to be up-to-date - a lifetime of 10 minutes would be sufficient. Obviously, the cache won't be rebuilt if no user is willing to be the "cache-victim" and therefore just cancels the request. Are these assumptions correct?
So, I came up with this idea:
Symfony should fake the HTTP request after rebuilding the cache. The new cache entries should be written to a temporary file/directory and swapped with the previous cache entries as soon as cache rebuilding has finished.
Is this possible?
In my opinion, this is similar to the concept of double buffering.
Wouldn't it be silly, if there was a single "gpu-victim" in a multiplayer game who sees the screen building up line by line? (This is a lop-sided comparison, I know ... ;) )
Edit
There is no "cache-victim" - every 10 minutes, page reloading takes 1 minute for every user.
I think your problem is due to some missing or wrong indexes. I have a sf1.4 project for a large soccer site (i.e. 2M pages/day), and the pagers aren't slow even though our database has more than 1M rows these days. Take a look at your query with EXPLAIN and check where it is going bad...
Sorry for necromancing (is there a badge for that?).
By configuring cache.yml you are just caching the view layer of your app (that is, CSS, JS and HTML), and only for REQUESTS WITHOUT PARAMETERS. Navigating the pager obviously puts a ?page=X on the GET request.
Taken from symfony 1.4 config.yml documentation:
An incoming request with GET parameters in the query string or submitted with the POST, PUT, or DELETE method will never be cached by symfony, regardless of the configuration. http://www.symfony-project.org/reference/1_4/en/09-Cache
What might help you is caching the database results, but it's a painful process in symfony/doctrine. Refer to:
http://www.symfony-project.org/more-with-symfony/1_4/en/08-Advanced-Doctrine-Usage#chapter_08_using_doctrine_result_caching
Edit:
This might help you as well:
http://www.zalas.eu/symfony-meets-apc-alternative-php-cache
