The W3C is proposing the MutationObserver API to replace the Mutation Events API. More info here: http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html#mutation-observers.
I have a few newbie questions about the new APIs:
1. In the MutationRecord, what is the purpose of previousSibling and nextSibling? Where do they point in the case of multiple addedNodes and removedNodes?
2. If there are multiple added nodes and removed nodes, how do you determine the order in which the changes happened?
3. Can the same node be in both addedNodes and removedNodes, e.g. a node gets added and immediately removed? If yes, can the same node appear multiple times in any category, e.g. a node got added, removed, and added again? If yes, question #2 above becomes more relevant.
FYI, these APIs just showed up in Firefox and WebKit nightly builds (in addition to being present in Chrome).
Thanks, Sunil
I found a discussion here which provides some of the answers: http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1622.html.
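To make the semantics concrete, here is a minimal sketch (the "container" element id is a made-up example). Each childList record describes one contiguous mutation, so previousSibling and nextSibling point at the node just before the first added/removed node and the node just after the last one. Records are delivered to the callback in the order the mutations occurred, which answers the ordering question: a node that is added and then removed shows up in the addedNodes of one record and the removedNodes of a later record.

```typescript
const target = document.getElementById('container')!;

const observer = new MutationObserver((records: MutationRecord[]) => {
  records.forEach((r, i) => {
    // previousSibling/nextSibling frame the affected range of this record.
    console.log(`record ${i}:`,
      'added', r.addedNodes.length,
      'removed', r.removedNodes.length,
      'prev', r.previousSibling,
      'next', r.nextSibling);
  });
});

observer.observe(target, { childList: true, subtree: true });

// Two separate mutations -> two records, in the order they happened.
const el = document.createElement('span');
target.appendChild(el);  // record 0: el appears in addedNodes
target.removeChild(el);  // record 1: the same el appears in removedNodes
```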
I have a question regarding substrate-node-template (https://github.com/substrate-developer-hub/substrate-node-template).
I have run the node together with the frontend, and I noticed there is some activity on the blockchain even without any interaction from the user. Where is this activity configured in the source code?
What are current and finalized blocks? Can anyone explain please?
The node template generates blocks even if there are no transactions happening.
I would encourage you to go through this section of the Knowledge Base to understand how consensus and block generation work: https://substrate.dev/docs/en/knowledgebase/advanced/consensus#consensus-in-substrate
EDIT: if you are curious about how to make your node generate blocks only when there are transactions happening, this is a good resource.
Now, as a short answer to what current and finalized blocks are: the ones under the current/best name are the blocks that have been authored, and the finalized ones are the blocks which the consensus mechanism considers final.
You can find a formal definition of the protocols on the Web3 Foundation research pages:
Block Production
Finality
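If you want to watch the two notions side by side against a running node, here is a minimal sketch using @polkadot/api (an assumption on my part: your frontend already depends on this library; ws://127.0.0.1:9944 is the node-template default):

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:9944'),
  });

  // "Current/best": the latest block that has been authored.
  await api.rpc.chain.subscribeNewHeads((header) => {
    console.log(`best      #${header.number}`);
  });

  // "Finalized": the latest block the consensus mechanism considers final.
  await api.rpc.chain.subscribeFinalizedHeads((header) => {
    console.log(`finalized #${header.number}`);
  });
}

main().catch(console.error);
```

You should see the finalized number trail the best number by a couple of blocks, which is exactly the gap between authoring and finality described above.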
I have created the following view in Ganglia, showing cpu_user stats:
Can someone tell me what Sintr means? I was not able to find any information on Google or the Stack Exchange sites.
Interestingly, I have two servers with identical hardware that I'm monitoring, but only one of them has the Sintr entry (which caught my eye).
Okay, I found an answer hidden in some Ganglia dev mailing list...
From this post:
I also added two specific metrics to Linux. cpu_intr and cpu_sintr
count the number of cycles spent on hard/soft interrupts.
Still wondering why it's only shown for one server and not for the other, but that's another story.
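For what it's worth, on Linux those counters ultimately come from the irq and softirq columns of /proc/stat, so you can inspect the raw numbers the metrics are derived from. A quick sketch, assuming Node.js on a Linux box:

```typescript
import { readFileSync } from 'fs';

// The aggregate "cpu" line of /proc/stat has the fields:
// user nice system idle iowait irq softirq steal guest guest_nice
const cpuLine = readFileSync('/proc/stat', 'utf8')
  .split('\n')
  .find((line) => line.startsWith('cpu '))!;

const fields = cpuLine.trim().split(/\s+/);
console.log('user:', fields[1], 'irq:', fields[6], 'softirq:', fields[7]);
```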
Hi folks,
We use StormCrawler with Elasticsearch to build an index of our homepage, which consists of "old pages" and "new pages".
My question in short:
If two pages, A (old) and B (new), link to page X, how do we pass metadata from B to X?
My question in long:
We relaunched our homepage step by step, so at the moment we have PDF files which are reachable only via the old HTML pages, only via the new HTML pages, or both ways.
For "order by" purposes we must mark all PDF files which are reachable from the new HTML pages.
So we add "newHomepage=true" to seeds.txt and "newHomepage" to the metadata.transfer list in crawler-conf.yaml: fine :-)
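For reference, this is roughly what our two pieces look like (example.com stands in for our real host). In seeds.txt, the metadata is tab-separated after the URL:

```
https://www.example.com/new/	newHomepage=true
```

and in crawler-conf.yaml, keys listed under metadata.transfer are copied from a page to its outlinks:

```yaml
metadata.transfer:
  - newHomepage
```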
But for the PDF files which are reachable from both old and new HTML pages, we now have a race condition: if a PDF file is DISCOVERED from an old page first, this information (newHomepage=false) is written to the status index and cannot be overridden.
(StatusUpdaterBolt does not overwrite existing documents; IndexerBolt does overwrite by default.)
To make things more complicated: in our case, a URL (on an HTML page) pointing to a PDF is redirected two times before the file is delivered.
So from my point of view we have two possibilities:
1. Start the crawler twice: first index only the new pages (and all reachable PDF files), then index the old pages.
--> problem: new pages which are changed after the crawler was started
2. Store the outbound links and use them to set "newHomepage" independently of the crawler.
--> problem: short periods with wrong metadata in the index
Any advice or other ideas?
Best regards
Karsten
Thanks for sharing your problem, and great to hear that you are using SC. This is an interesting and unusual use case.
Your analysis of the problem is correct. An intuitive approach would be to extend the default StatusUpdaterBolt so that it updates the metadata if a document already exists. You'd need to remove the part that does the check on whether the doc has a status of DISCOVERED.
This would slow things down, but since you are dealing with a single website, this should not have a massive impact.
You could push the logic even further by setting a new nextFetchDate if the document had been fetched so that it gets refetched and updated quicker in the doc index (as opposed to the status one).
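Conceptually, the change boils down to issuing a partial update that merges the new metadata instead of skipping documents that already exist. This is just a sketch to illustrate the idea, not the actual bolt code; the index name "status" and the document id scheme are assumptions:

```typescript
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Merge new metadata into an existing status document rather than
// leaving it untouched because its status is already DISCOVERED.
async function mergeMetadata(docId: string, newHomepage: boolean) {
  await client.update({
    index: 'status',                     // assumed index name
    id: docId,                           // however your status docs are keyed
    doc: { metadata: { newHomepage } },  // partial update: fields are merged
  });
}
```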
How can I bind an issue to another issue, such that it will trigger the other issue to do something?
In the example above, when the "explicit device" issue is moved to the "finished" column, I want the "error handling" issue (leftmost) to move to the "in progress" column automatically, because I may not remember which issue needs what and what was which; having to check all issues whenever one is finished would become tiring after some point.
Even better: is it possible to build an issue tree and finish it from the ground up, without having to stop at every issue just to find the closest root of a given issue?
Another example:
add method is written: "issue1" complete
multiply method is written: "issue2" complete
suddenly, a multiply-add method, "issue3", pops into the first column, or if it is already there, it moves right by one column.
The notion of a project board presented at GitHub Universe 2016 is still lacking in terms of fine-grained issue management.
That is why there are so many third-party integrations, including ZenHub (free for small teams and public accounts), which does have more features.
The point is: look for a third-party integration (with a free offer) for your feature.
I see that the signature of the umbraco.content.AfterUpdateDocumentCache event uses the umbraco.cms.businesslogic.web.Document object. Unfortunately, that object is deprecated in Umbraco 7.
What is the new event?
I have the same issue with the umbraco.content.AfterClearDocumentCache event.
Thanks
It doesn't appear there's any analog for umbraco.content.AfterUpdateDocumentCache in the Umbraco 7 code.
It seems you may have to reconsider your implementation approach around the available events hanging off Umbraco.Core.Services.ContentService.
Looking at the U7 implementation of ContentService.Publish, for example, it calls the internal SaveAndPublishDo, which shows that the PreviewXML and ContentXML disc caches are updated before the Saved and Published events are fired (via Umbraco.Core.Publishing.PublishingStrategy). I presume the old umbraco.content.AfterUpdateDocumentCache was a single event that happened after both of the aforementioned events. In its absence, I believe you may have to watch for the Saved/Published/Deleted events separately.
I can see that there are a bunch of events that would cause the cache update, and it'd be a pain to wire them up separately, but maybe a different approach, tailored to the granularity of the available events, is an improvement?
It may also help to backtrack from Umbraco.Core.Cache.CacheRefresherBase, where I see there are events like OnCacheUpdated. They exist and do fire, though I'm not sure if or where they are publicly exposed.
This is probably more appropriate as a comment (I need more points), as it's not a 100% resolution to your question. Hopefully it helps nudge you in the right direction.
http://issues.umbraco.org/issue/U4-3462
According to this thread, answered by members of the Umbraco team, AfterUpdateDocumentCache should still be used, and the deprecated parameters can be safely ignored.
I decided to use AfterUpdateDocumentCache in Umbraco 7 but noticed two issues. The first is that the event fires twice. The second is that I get the same, unmodified content when rendering the page from within this event.
Then I decided to use CacheRefresherBase and the CacheUpdated event, but I still have the same problem, probably due to additional cache-refresh propagation.
The only workaround I see is to use Thread.Sleep in a new Task and purge the URL a little bit later.