OpenMesh restore deleted elements

According to the documentation, calling delete (delete_face(), delete_vertex(), delete_edge()) on mesh elements only deletes them internally by setting the appropriate status flag. These elements are permanently deleted only when the garbage collector is called. My question is, is it possible to restore deleted and not yet garbage-collected items, in a targeted manner? I guess it is possible to restore all items marked for deletion by resetting their Status attribute, but is it possible to undelete a specific face/vertex/edge by their handle?
It appears to me that simply resetting the Status attribute of the item to be undeleted is not enough: all the connected elements that were marked deleted as a consequence of deleting that item have to be undeleted as well.
Side note: I'm using the term undelete instead of restore because the latter refers to restoring from file in the documentation.
Edit: I am also interested in ways to efficiently undelete all items marked for deletion at once. Smart taggers provide an O(1) way to untag all elements. Is there a way to undelete all elements with O(1) efficiency?
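To make the single-face case from above concrete, this is roughly the bookkeeping I have in mind (a rough sketch, assuming the status properties have been requested and that garbage_collection() has not run yet, so the connectivity of the deleted face is still there; whether clearing the flags alone really leaves the mesh in a valid state is exactly what I am unsure about):

#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>

typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

// Clear the 'deleted' flag of a face and of the incident elements that
// delete_face() may have marked as deleted along with it.
void undelete_face(Mesh &mesh, Mesh::FaceHandle fh)
{
    mesh.status(fh).set_deleted(false);
    for (Mesh::FaceEdgeIter fe_it = mesh.fe_iter(fh); fe_it.is_valid(); ++fe_it)
        mesh.status(*fe_it).set_deleted(false);
    for (Mesh::FaceVertexIter fv_it = mesh.fv_iter(fh); fv_it.is_valid(); ++fv_it)
        mesh.status(*fv_it).set_deleted(false);
}

(The halfedge status flags would presumably need the same treatment.)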

Related

How is the UNDO and REDO feature in any TEXT EDITOR implemented? [duplicate]

Part of my project is to write a text editor that is used for typing some rules, compiling my application and running it. Writing the compiler is finished and a beta version has been released. In the final version we must add undo and redo to the text editor. The text editor uses a file that I save periodically. How should I design undo and redo for my text editor? What has to change in the structure of the persisted file?
You can model your actions as commands that you keep on two stacks: one for undo, another for redo. You can compose your commands to create higher-level commands, for example when you want to undo the actions of a macro, or when you want to group the individual keystrokes of a single word or phrase into one action.
Each action in your editor (or a redo action) generates a new undo command that goes into the undo stack (and also clears the redo stack). Each undo action generates the corresponding redo command that goes into the redo stack.
You can also, as mentioned in the comments by derekerdmann, combine both undo and redo commands into one type of command, that knows how to undo and redo its action.
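A minimal sketch of the two-stack scheme in C++ (the names and the single InsertText command are made up for illustration; a real editor command would capture more context):

#include <cstddef>
#include <memory>
#include <stack>
#include <string>
#include <utility>

// Command pattern: every edit knows how to apply and how to undo itself.
struct Command {
    virtual ~Command() = default;
    virtual void apply(std::string &doc) = 0;
    virtual void revert(std::string &doc) = 0;
};

struct InsertText : Command {
    std::size_t pos;
    std::string text;
    InsertText(std::size_t p, std::string t) : pos(p), text(std::move(t)) {}
    void apply(std::string &doc) override  { doc.insert(pos, text); }
    void revert(std::string &doc) override { doc.erase(pos, text.size()); }
};

struct Editor {
    std::string doc;
    std::stack<std::unique_ptr<Command>> undo_stack, redo_stack;

    void perform(std::unique_ptr<Command> cmd) {
        cmd->apply(doc);
        undo_stack.push(std::move(cmd));
        redo_stack = {};                       // a fresh edit clears the redo stack
    }
    void undo() {
        if (undo_stack.empty()) return;
        undo_stack.top()->revert(doc);
        redo_stack.push(std::move(undo_stack.top()));
        undo_stack.pop();
    }
    void redo() {
        if (redo_stack.empty()) return;
        redo_stack.top()->apply(doc);
        undo_stack.push(std::move(redo_stack.top()));
        redo_stack.pop();
    }
};

Grouping - a macro, or all the keystrokes of one word - is then just a composite command whose apply() and revert() loop over its children (revert in reverse order).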
There are basically two good ways to go about it:
the "Command" design pattern
using only OO over immutable objects, where everything is just immutable objects made of immutable objects that are themselves made of immutable objects (this is less common but wonderfully elegant when done correctly)
The advantage of OO over immutable objects, compared with the naive command or naive undo/redo approach, is that you don't need to think much about it: there is no need to "undo" the effect of an action and no need to "replay" all the commands. All you need is a pointer into a huge list of immutable objects.
Because objects are immutable, all the "states" can be incredibly lightweight, since you can cache/reuse most objects in any state.
"OO over immutable objects" is a pure jewel. Probably not gonna become mainstream before another 10 years that said ; )
P.S: doing OO over immutable objects also amazingly simplifies concurrent programming.
If you don't want anything fancy, you can just add an UndoManager. Your Document will fire an UndoableEdit every time you add or remove text. To undo and redo each change, simply call the corresponding undo() and redo() methods on the UndoManager.
The downside of this is that UndoManager adds a new edit each time the user types something, so typing "apple" will leave you with 5 edits, undoable one at a time. For my text editor, I wrote a wrapper for edits that stores the time the edit was made in addition to the text change and offset, as well as an UndoableEditListener that concatenates new edits onto previous ones if there is only a short period of time between them (0.5 seconds works well for me).
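Swing specifics aside, the merge rule itself is tiny. A rough sketch of just that rule in C++ (made-up types, and only insertions are handled, so it is far from a drop-in replacement for the UndoableEditListener):

#include <chrono>
#include <cstddef>
#include <string>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Edit {
    std::size_t offset;        // where the text was inserted
    std::string inserted;      // what was inserted
    Clock::time_point when;    // when the edit happened
};

struct CoalescingHistory {
    std::vector<Edit> edits;
    std::chrono::milliseconds gap{500};   // edits closer together than this get merged

    void record(std::size_t offset, const std::string &text) {
        Clock::time_point now = Clock::now();
        if (!edits.empty()) {
            Edit &last = edits.back();
            // Extend the previous edit if this keystroke follows it closely in time and position.
            if (now - last.when < gap && offset == last.offset + last.inserted.size()) {
                last.inserted += text;
                last.when = now;
                return;
            }
        }
        edits.push_back({offset, text, now});
    }
};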
This works well for general editing, but causes problems when a massive replace is done. If you had a document with 5000 instances of "apple" and you wanted to replace them with "orange", you'd end up with 5000 edits, each storing "apple", "orange" and an offset. To lower the amount of memory used, I've treated this as a separate case from ordinary edits and am instead storing "apple", "orange" and an array of 5000 offsets. I haven't gotten around to applying this yet, but I know that it'll cause some headaches when multiple strings match the search condition (e.g. case-insensitive search, regex search).
Wow, what a coincidence - I have literally in the last hour implemented undo/redo in my WYSIWYG text editor:
The basic idea is to save either the entire contents of the text editor in an array, or just the difference since the last save point.
Update this array at significant points, i.e. every few characters (check the length of the content on each keypress; if it differs by more than, say, 20 characters, make a save point). Also save at changes in styling (if rich text), when adding images (if the editor allows this), when pasting text, etc. You also need a pointer (just an int variable) that indicates which item in the array is the current state of the editor.
Make the array have a set length. Each time you add a save point, add it to the start of the array and move all of the other entries down by one (the last item in the array will be forgotten once you have too many save points).
When the user presses the undo button, check whether the current contents of the editor are the same as the latest save point. If they are not, the user has made changes since the last save point, so save the current contents of the editor (so they can be redone), set the editor to the last save point, and set the pointer variable to 1 (the 2nd item in the array). If they are the same, no changes have been made since the last save point, so you need to undo to the point before that: increment the pointer by 1 and set the contents of the editor to the array entry the pointer now refers to.
To redo, simply decrease the pointer value by 1 and load that entry of the array (make sure to check that you have not moved past the most recent save point at the front of the array).
If the user makes edits after undoing, move the array cell the pointer refers to up to cell 0 and move the rest up by the same amount (you don't want to redo to the other stuff once they've made different edits).
One other major catch - make sure you only add a save point if the contents of the text editor have actually changed (otherwise you get duplicate save points and it will seem to the user like undo is not doing anything).
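In code, the scheme above looks roughly like this (a sketch with made-up names in C++; newest save point at index 0 as described, and the "has the content actually changed" check from the last point built in):

#include <deque>
#include <string>

struct SavePointHistory {
    std::deque<std::string> saves{""};   // saves[0] is the newest save point
    size_t pointer = 0;                  // which save point the editor currently shows
    size_t max_saves = 100;              // the set length of the "array"

    // Call at significant points (every ~20 characters, style change, paste, ...).
    void save(const std::string &contents) {
        if (contents == saves[pointer]) return;               // nothing actually changed
        saves.erase(saves.begin(), saves.begin() + pointer);  // edits after an undo discard the redo entries
        saves.push_front(contents);
        pointer = 0;
        if (saves.size() > max_saves) saves.pop_back();       // forget the oldest save point
    }

    std::string undo(const std::string &current) {
        if (current != saves[pointer]) {  // changes made since the last save point:
            save(current);                // store them first so they can be redone
            pointer = 1;                  // then show the previous save point
        } else if (pointer + 1 < saves.size()) {
            ++pointer;                    // step one save point further back
        }
        return saves[pointer];
    }

    std::string redo() {
        if (pointer > 0) --pointer;
        return saves[pointer];
    }
};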
I can't help you with java specifics, but I'm happy to answer any other questions you have,
Nico
You can do it in two ways:
keep a list of editor states and a pointer into the list; undo moves the pointer back and restores the state there, redo moves it forward instead, and doing something new throws away everything beyond the pointer and inserts the new state as the new top element;
do not keep states, but actions, which requires that for every action you have a counteraction to undo the effects of that action
In my (diagram) editor, there are four levels of state changes:
action fragments: these are part of a larger action and not separately undoable or redoable (e.g. moving the mouse)
actions: one or more action fragments that form a meaningful change which can be undone or redone, but which are not reflected in the edited document as changed on disk (e.g. selecting elements)
document changes: one or more actions that change the edited document as it would be saved to disk (e.g. changing, adding or deleting elements)
document saves: the present state of the document is explicitly saved to disk - at this point my editor throws away the undo history, so you can't undo past a save
This is a job for the command pattern.
Here is a snippet that shows how SWT supports Undo/Redo operations. Take it as a practical example (or use it directly, if your editor is based on SWT):
SWT Undo Redo
Read the book Design Patterns: Elements of Reusable Object-Oriented Software. As far as I remember, there is a pretty good example in it.

Possibility of stale data in cache-aside pattern

Just to recap the cache-aside pattern: it defines the following steps when fetching and updating data.
Fetching Item
Return the item from cache if found in it.
If not found in cache, read from data store.
Put the read item in cache and return it.
Updating Item
Write the item in data store.
Remove the corresponding entry from cache.
This works perfectly in almost all cases, but it seems to fail in one theoretical scenario.
What if steps 1 and 2 of updating an item happen between steps 2 and 3 of fetching an item? In other words, suppose the data store initially had the value 'A' and it was not in the cache. When fetching the item, we read 'A' from the data store, but before we put it into the cache, the item was updated to 'B' in another thread (so 'B' was written to the data store, and that thread tried to remove the entry from the cache, which was not there at that time). Now the fetching thread puts the item it read (i.e. 'A') into the cache. So 'A' will stay cached, and further fetches will return stale data until the item expires or is updated again.
So am I missing something here - is my understanding of the pattern wrong? Or is the scenario just practically impossible, so there is no need to worry about it?
Also I would like to know if some changes can be made in the pattern to avoid this problem.
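To make the interleaving concrete, here is a stripped-down sketch of the two code paths (toy code, with a std::unordered_map standing in for both the cache and the data store; the comments mark the window I am worried about - the point is the interleaving of the steps, not data races on the maps themselves):

#include <string>
#include <unordered_map>

std::unordered_map<std::string, std::string> cache;
std::unordered_map<std::string, std::string> data_store;

std::string fetch(const std::string &key) {
    auto it = cache.find(key);
    if (it != cache.end()) return it->second;  // 1. found in cache
    std::string value = data_store[key];       // 2. read from the data store (reads 'A')
    // <-- if the whole update() below runs inside this window ...
    cache[key] = value;                        // 3. ... this caches the now-stale 'A'
    return value;
}

void update(const std::string &key, const std::string &value) {
    data_store[key] = value;                   // 1. write 'B' to the data store
    cache.erase(key);                          // 2. entry not cached yet, so this removes nothing
}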
Your understanding of the pattern appears perfectly correct, according to the MSDN definition. In fact, it mentions the same failure scenario that you describe.
The order of the steps in this sequence is important. If the item is removed before the cache is updated, there is a small window of opportunity for a client application to fetch the data (because it is not found in the cache) before the item in the data store has been changed, resulting in the cache containing stale data.
The MSDN article does note that, "it is usually impractical to expect that cached data will always be completely consistent with the data in the data store." Expiration and eviction are two strategies mentioned for dealing with this problem.
An old computer science joke goes like this.
There are only two hard problems in computer science: cache invalidation, naming things, and off-by-one errors.
You've stumbled upon the first of these problems.
Also I would like to know if some changes can be made in the pattern to avoid this problem.
There is no way to avoid this situation in general. The memcached protocol introduces a special command:
"cas" is a check and set operation which means "store this data but
only if no one else has updated since I last fetched it."
The scenario should be modified:
Fetching Item
Return the item from cache if found in it.
If not found in cache, read from data store.
Check and swap the corresponding entry in cache and return it.
Updating Item
Check and swap the corresponding entry in cache.
Write the item in data store.
This scenario also does not guarantee full consistency.
Imagine the following situation:
Writing the item to the data store fails while updating the item in the cache succeeds. The latest item value will then be kept only in the cache.
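As a toy illustration of what check-and-swap buys you (this is not the memcached wire protocol, just a made-up in-process cache with a per-key version number playing the role of the "cas" token; version 0 stands for "I saw no entry at all"):

#include <cstdint>
#include <mutex>
#include <string>
#include <unordered_map>

class CasCache {
    struct Entry { std::string value; uint64_t version; };
    std::unordered_map<std::string, Entry> entries;
    uint64_t next_version = 1;
    std::mutex m;
public:
    // Read a value together with its version token (0 if the key is absent).
    uint64_t gets(const std::string &key, std::string &value) {
        std::lock_guard<std::mutex> lock(m);
        auto it = entries.find(key);
        if (it == entries.end()) return 0;
        value = it->second.value;
        return it->second.version;
    }
    // Store only if the key is still at the version the caller saw (check and swap).
    bool cas(const std::string &key, const std::string &value, uint64_t seen_version) {
        std::lock_guard<std::mutex> lock(m);
        auto it = entries.find(key);
        uint64_t current = (it == entries.end()) ? 0 : it->second.version;
        if (current != seen_version) return false;   // somebody else touched the key meanwhile
        entries[key] = Entry{value, next_version++};
        return true;
    }
};

With this, the fetching thread that is still holding the value it read from the data store fails its cas as soon as an updater has written a newer value into the cache, so the stale value never overwrites the fresh one.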

Design a share, re-share functionality for a website, avoiding duplication

This is an interesting interview question that I found somewhere. To elaborate more:
You are expected to design classes and data structures for some website such as facebook or linkedin where your activity can be shared and re-shared. Design should be such that it avoids redundancy and duplication.
While thinking about this problem I got stuck on the "link vs copy" problem as discussed here.
But since the problem states that duplication should be avoided, I decided to go the "link" way. This makes sharing/re-sharing easier but deleting very difficult, i.e. if the original user deletes their post, all the shares should be deleted. (Programmatically speaking, all the objects pointing to the particular activity should be made null, and this is the difficult part here, i.e. finding all the pointing objects.)
Wouldn't it be better to keep the shares? The original user deletes their post, fine, it's gone. But everyone who has linked to it should not suddenly have it disappear on them.
This could be done the way Unix handles hard links. "Deleting" just means removing one link to an object -- an inode, in Unix terms. You don't remove the object itself until the link count is zero.
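A sketch of that bookkeeping (hypothetical types and IDs; the post record carries a link count exactly like an inode, and shares are nothing more than directory entries pointing at it):

#include <string>
#include <unordered_map>

struct Post {
    std::string content;
    int link_count = 1;   // the author's own copy counts as one link
};

std::unordered_map<long, Post> posts;   // post id -> post        (the "inode table")
std::unordered_map<long, long> shares;  // share id -> post id    (the "directory entries")

void share(long share_id, long post_id) {
    posts[post_id].link_count += 1;
    shares[share_id] = post_id;
}

// Removing a share - or the author deleting their own post - just drops one link;
// the content itself goes away only when the last link is gone.
void drop_link(long post_id) {
    Post &p = posts[post_id];
    if (--p.link_count == 0) posts.erase(post_id);
}

void unshare(long share_id) {
    auto it = shares.find(share_id);
    if (it == shares.end()) return;
    drop_link(it->second);
    shares.erase(it);
}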
It's not obvious from the original specification that deletion should work as you describe. It might be desired that when the original user deletes the item, it is not deleted elsewhere; in that case you don't necessarily need to track all references, just keep a reference count on each post, and remove it from the database only when the count hits zero.
If you do want the behavior you describe, it may be achievable by simply removing broken links as and when you encounter them, again relieving you of the need to track each reference. The cost of tracking and updating every reference to every post is replaced with the comparable cost of one failed lookup for each referring page. The latter case is simpler to implement, though, and the cost doesn't hit your server all at once.
In real life, I would implement all references as bidirectional anyway, because it's likely to be needed sooner or later as you add features. For example, a "like" counter seems pretty simple, but to prevent duplicate votes you need to keep track of who has liked each item, and then if you want to remove their "like" when they delete their profile, you need to keep a list of each user's outbound "likes" too.
It takes a lot of database activity to implement something like Facebook...

Hash table entry linked list - Mutex locks for thread safe operation

Would a Win32 Mutex be the most efficient way to limit thread access to a linked list in a hash table? I didn't want to create a lot of handles, and the size of the hash table is variable. It could potentially be thousands of entries. I didn't want to lock the whole table down when only one entry's list is being changed, so that would call for multiple Mutexes (one per list), but I figured I could probably get away with pooling about 20 Mutex handles and reusing them, since there shouldn't be that many threads accessing it simultaneously. Is there an alternative to Mutex locks for this case?
A lot here depends on the details of your hash table. My immediate reaction would be to avoid a mutex/critical section at all, at least if you can.
At least for adding an item to the linked list, it's pretty easy to avoid it by using an InterlockedExchangePointer instead. Presumably you have a struct something like:
struct LL_item {
    LL_item *next;
    std::string key;
    whatever_type value;
};
To insert an item of this type into a linked list, you do something like:
LL_item *item = new LL_item;
// set key and value here
item->next = nullptr;   // will be pointed at the old head right after the exchange
LL_item *old_head = (LL_item *)InterlockedExchangePointer((PVOID volatile *)&bucket->head, item);
item->next = old_head;  // link the rest of the bucket behind the new node
Here bucket->head is the pointer to the first item currently in the bucket's list. InterlockedExchangePointer atomically stores the address of the new node in bucket->head and returns the previous head, which we then put into the new node's next pointer. Concurrent insertions interleave safely, because each thread links its node to whatever head it swapped out. There is, however, a brief window between the exchange and the assignment to item->next during which a reader that has already reached the new node cannot see the rest of the bucket; if lookups must never miss an existing key, prepare next first and publish the node with an InterlockedCompareExchangePointer loop instead.
I believe you can (probably) normally use an exchange to remove an item from a list as well, but I'm not sure -- I haven't thought through that quite as thoroughly. Quite a few hash tables don't (even try to) support deletion anyway, so you may not care about that though.
I'd suggest a slim reader writer lock. Sure, it locks the entire data structure when you're doing updates, but typically you'll have a lot more reads than writes to the hash table. My experience with SRW locks is that it works quite well and performance is very good. You probably should give it a try. That'll get your program working. Then you can profile the code to determine if there are bottlenecks and if so where the bottlenecks are. It's quite possible that the SRW lock is plenty fast enough.
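For reference, the SRW calls themselves are minimal; a sketch with a single lock over the whole table (the lookup/insert wrappers and their bodies are placeholders, not your actual code):

#include <windows.h>
#include <string>

SRWLOCK table_lock = SRWLOCK_INIT;   // one lock guarding the whole hash table

// Readers: any number of threads may hold the lock in shared mode at once.
LL_item *find_item(const std::string &key) {
    AcquireSRWLockShared(&table_lock);
    LL_item *result = nullptr;       // walk the bucket's linked list here
    ReleaseSRWLockShared(&table_lock);
    return result;
}

// Writers: exclusive mode while a bucket's list is being modified.
void insert_item(LL_item *item) {
    AcquireSRWLockExclusive(&table_lock);
    // prepend 'item' to its bucket's linked list here
    ReleaseSRWLockExclusive(&table_lock);
}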

Implementation of Unsaved Changes Detection

It seems like three ways to approach detecting unsaved changes in a text/image/data file might be to:
Update a boolean flag every time the user makes a change or saves, which would result in a lot of unnecessary updates.
Keep a cached copy of the original file and diff the two every time a save operation needs to be checked.
Keep a stack of all past operations and push/pop operations as needed, resulting in a lot of extra memory usage.
In general, how do commercial applications detect whether unsaved changes exist and what are the advantages/disadvantages of each approach? I ran into this issue while writing a custom application that has special saving behavior and would like to know if there is a known best practice.
As long as you need an undo/redo system, you need that stack of past operations anyway. To detect which state the document is in, one item of the stack is marked as the 'saved state'; if the current stack node is not that item, the document has changed.
You can see an example of this in Qt's QUndoStack ( http://doc.qt.nokia.com/stable/qundostack.html ) and its isClean() and setClean().
As for proposition 1, updating a boolean is not problematic and takes very little time.
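The clean-mark idea is only a couple of lines on top of whatever undo stack you already have; a sketch in C++ (not Qt's code, just the same idea):

#include <cstddef>

// Tracks the position in the undo history plus the position that corresponds to the file on disk.
struct DirtyTracker {
    std::size_t current = 0;   // how many operations are currently applied
    std::size_t saved = 0;     // value of 'current' at the last save

    void did_edit() { ++current; }
    void did_undo() { if (current > 0) --current; }
    void did_redo() { ++current; }
    void did_save() { saved = current; }
    bool is_clean() const { return current == saved; }  // same idea as QUndoStack::isClean()
};

One subtlety: if the user undoes past the save point and then makes a new edit, the saved state can no longer be reached by redo, so the saved marker has to be invalidated in that case as well.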
This depends on the features you want and the size/format of the files, I guess.
The first option is the simplest, and it gives you just what you want with minimal overhead.
The second option has the advantage that you can detect when changes have been manually reverted, so that there is no real change after all (although that probably doesn't happen all too often). On the other hand, it is much more costly to make a diff just to check whether anything was modified. You probably don't want to do that every time the user presses a key.
The third option gives the ability to provide an undo-history. You could limit the number of items in that history by grouping changes together that were made consecutively (without moving the cursor in between), or something like that.
