I'm learning React+Redux and I don't understand the proper way to handle animations. Let's speak by example:
For instance, I have a list and I would like to remove items on click. That's super easy if I have no animation effects there: dispatch a REMOVE_ITEM action on click, the reducer removes the item from the store, and React re-renders the HTML.
Let's add an animation for deleting the line item on click. So, when the user clicks on an item I want to run a fancy line-item-removal effect and... how? I can think of several ways to do it:
1) On click I dispatch a REMOVE_ITEM action, the reducer marks the item as goingToBeDeleted in the store, then React renders that element with a .fancy-dissolve-animation class, and I run a timer to dispatch a second action, REMOVE_ITEM_COMPLETED. I don't like this idea, because it's still unclear how to add JS animations here (for example, with TweenMax), and I have to run a JS timer just to re-render when the CSS animation ends. Doesn't sound good.
2) I dispatch ITEM_REMOVE_PROGRESS actions at an interval of ~30ms, and the store holds some "value" representing the current state of the animation. I don't like this either, as it would require copying the store ~120 times for ~2 seconds of animation (say I want smooth 60 fps), and that's simply a waste of memory.
3) Run the animation and dispatch REMOVE_ITEM only after the animation finishes. That's the most appropriate way I can think of, but I'd still like the store to change right after the user acts. For example, the animation may take longer than a few seconds, and REMOVE_ITEM might sync with a backend – there's no reason to wait for the animation to finish before making the backend API call.
Thanks for reading – any suggestions?
React has a great solution to this problem in the ReactCSSTransitionGroup helper class (see https://facebook.github.io/react/docs/animation.html). With this option, React takes care of it for you, by keeping the DOM state for the child as it was at the last render. You simply wrap your items in a ReactCSSTransitionGroup object. This keeps track of its children, and when it is rendered with a child removed, instead of rendering without the child, it renders with the child, but adds a CSS class to the child (which you can use to trigger a CSS animation, or you can just use CSS transitions for simplicity). Then, after a timeout (configured as a prop passed to ReactCSSTransitionGroup), it will re-render again, with the child removed from the DOM.
To use ReactCSSTransitionGroup, you'll need to npm install react-addons-css-transition-group, and then require/import 'react-addons-css-transition-group'. The animation docs give more detailed information.
One thing to remember - make sure the children have unique, unchanging keys. Just using the index as the key will make it behave incorrectly.
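For illustration, here's a minimal sketch of what that looks like (the item shape, the onRemove handler and the CSS class names are my own assumptions, not part of the original question):

import React from 'react';
import ReactCSSTransitionGroup from 'react-addons-css-transition-group';

function ItemList({ items, onRemove }) {
  return (
    // transitionName maps to .fancy-dissolve-leave / .fancy-dissolve-leave-active classes;
    // the leave timeout should match the duration of your CSS transition or animation
    <ReactCSSTransitionGroup
      component="ul"
      transitionName="fancy-dissolve"
      transitionEnterTimeout={300}
      transitionLeaveTimeout={500}
    >
      {items.map(item => (
        // each child needs a unique, unchanging key -- not the array index
        <li key={item.id} onClick={() => onRemove(item.id)}>
          {item.text}
        </li>
      ))}
    </ReactCSSTransitionGroup>
  );
}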
Instant (one-off) actions are problematic in Redux because it persists state: if we dispatch an action that changes the store, that change remains in every subsequent state, so we can end up with an animation replaying over and over because the corresponding flag is still set in the store.
My solution for instant actions in Redux is to add an id to the action, like this (example action):
{
  type: "SOME_ANIMATION",
  id: new Date().getTime() // timestamp of the animation init
}
Next, in the component that runs the animations, save the last animation id and, if it matches, don't run the animation. I store it on the component itself, for example (component code):
componentDidUpdate() {
  if (this.lastAnimationId === this.props.animation.id)
    return; // same animation id, so do nothing
  // it's a new animation: call setState or start the animation here
  this.lastAnimationId = this.props.animation.id; // remember the id of the last animation
}
Thanks to the id we can get away with a single action, without actions that reverse the state. Reversing actions after a timeout can cause problems, because if another action (connected with the component) is dispatched before the reversing action, the animation can start again.
The downside of my approach is that the animation data lives in the state, but so does the animation id, which tells us which animation it was. In other words, the store remembers the last dispatched animation.
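For completeness, the reducer for this approach could be as small as the following sketch (the reducer name and state shape are mine, not from the answer):

function animationReducer(state = { id: null }, action) {
  switch (action.type) {
    case "SOME_ANIMATION":
      // keep only the latest animation request; the component above decides
      // whether it has already handled this id
      return { id: action.id };
    default:
      return state;
  }
}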
I have a margin, and inside it is a frame that contains a few lines for signatures, but I want it to be visible only on the last page.
So I set it to print on the last page and its horizontal elasticity to Variable, but I thought I'd need to add a format trigger to the frame inside the margin. What should I write in PL/SQL to achieve this?
If you want a frame to only print on the last page, don't put it in the margin. Instead, put it below all other frames and (as you already have) set it to print only on the last page.
Whatever you put in the margin is always displayed on all pages. Simply move your signature frame to the main layout, below your main group frame, and set it to display on the last page only. I assume your signature query is a separate query that may or may not be related to the other queries... You may keep it inside the main group frame and make the signature frame variable, for example. It all depends on your design and requirements.
I get it now: there is an Edit Margin button in the toolbar, and you can edit everything inside the margin. So I put the frame inside it, set its property to print on the last page, and it worked as I wanted. Thanks for your replies.
There are some controls on Windows Phone that behave differently on the first interaction than on subsequent ones. For example, a button control takes about 3-5 seconds to start the required action the first time it is pressed, but on subsequent clicks it works immediately.
Another user control, which adjusts its height based on key presses, doesn't adjust properly the first time, but the second time it works.
Is there a way to either prepare the controls, i.e. set them in a ready state so that all clicks behave the same, or can the first click be faked to bypass this annoying behaviour?
Also what is causing this problem?
NB: I am testing on a Lumia 520 device.
Unfortunately, there is no way to prepare the controls. The Nokia Lumia 520 is a lower-memory device, so it seems slow when loading things into memory for the first time, and there are many background tasks running at the same time. You should try it on a higher-memory device and compare the behaviour.
I found out why it was happening from this app performance document: http://msdn.microsoft.com/en-us/library/windowsphone/develop/ff967560(v=vs.105).aspx#BKMK_Applicationstartup.
I have a loading panel that is collapsed by default and only made visible once the button is clicked. According to the document, elements in the Collapsed state aren't added to memory, which means the panel has to be initialized on the first click but not on subsequent ones.
The other UI control that behaved weirdly did so because its parent's height was not adjusted after its own height changed the first time; adjusting the parent's height as well fixed it.
I have an audio recording application for Windows Phone. It consists of a pivot control with two pivot items. One is for the recording controls, and the other is for reviewing and listening to the recorded items.
While recording is taking place, I need a way to prevent the user from navigating away from the current pivot item, but retain the feel that the entire pivot item moves without flipping to the next item, as if there were none.
I know I could use GestureListener from the Silverlight Toolkit, but with it I would need to simulate the pivot movement myself.
Is there a built-in way to prevent pivot navigation?
If not, can you point me to an example of how I can animate the control movement on a flick gesture?
Is it mandatory that the user remain on the one PivotItem? If not, you could just disable the second PivotItem so that the user knows it's there but can't actually interact with it.
secondPivotItem.IsEnabled = false;
Alternatively, you could dynamically insert the second PivotItem when you want it and remove it when you don't. For example, when recording:
mainPivot.Items.Remove(secondPivotItem);
then when you want the second PivotItem to appear:
mainPivot.Items.Add(secondPivotItem);
The only "problem" with this is that when you only have one PivotItem on screen, the user can't scroll. However, this is how a Pivot control is supposed to function.
If you really want the Pivot to scroll back to itself, you could create a blank PivotItem (with no header). Then handle the Pivot's LoadingPivotItem event. Check whether the item that is about to be loaded is the blank one. If so, use Pivot.SelectedItem = recordingPivotItem to navigate back to the recording PivotItem. You can then use the above method to dynamically add the second PivotItem when the recording is over. This isn't the normal UX for pivots, but it should do what you're trying to achieve.
It seems to me that the best solution is to make the pivot control invisible to hit testing altogether. I simply set PivotMain.IsHitTestVisible = false and set it back to true when I am done recording.
There is a good attached-property approach for keeping a particular element hit-test visible while making an entire panorama or pivot item hit-test invisible:
Here is a link to the author's blog post with the source code:
http://blogs.msdn.com/b/luc/archive/2010/11/22/preventing-the-pivot-or-panorama-controls-from-scrolling.aspx
This works for me until dynamic loading and removal of pivot items with TextBlock headers is added to the SDK's pivot control.
The downside of locking a person into a PivotItem, or disabling one so that they cannot navigate, is that you are going to frustrate the user. PivotItems are meant to be flicked to and from, and writing an app that behaves differently takes away from the user experience, because the app does not behave the way they expect it to.
Personally, if you are going to lock them into one pivot item, I think you should go ahead and create another page without a Pivot control and navigate to it. Also, whether you choose to do it this way or not, you need to keep in mind that regardless of whether they are locked into a certain pivotitem or they are navigated to another page, the back button must work as expected, or the app won't pass certification.
Working on an application that controls a remote robot where there is the potential for significant delay between pressing a button and that action actually happening. Furthermore, there is the potential that the command did not successfully reach the intended recipient after all (due to network unreliability, etc.). Additionally, there are variables in play whose changes are not instantaneous. For instance, there is a variable both for commanded speed as well as current speed; changing the commanded speed will not immediately make the current speed match that value.
The question is, how do I make the application reflect both the current state the remote robot is reporting and acknowledge to the user that his command was understood by the application, even though the system has not yet received notification from the robot that the command was acknowledged? (Popups are an absolute no-go.)
Some ideas that have been discussed:
Disable Buttons
When a command button is pressed, start a timer for some reasonable number of seconds and disable the button during that time. Don't update the corresponding label directly; instead, wait for a response from the robot (e.g. if you press a Speed + button, and to the right is a text label showing the current speed, don't immediately change the label but wait for a response from the robot). Once this response occurs, or when the timer expires, re-enable the button.
Pros: No additional control widgets needed on page. Labels always reflect current state of the robot.
Cons: If you wanted to send two speed updates in a row, you would have to wait until the first had been received and acknowledged. It would feel sluggish and unresponsive.
Logging Info
Have a log that users can view showing textual representations of all the actions the user has taken, timestamped and with the history clearly visible. It could be color-coded based on user preferences.
Pros: User has immediate feedback that his command was understood, as it appears in the log
Cons: Does not resolve problem of what to do with button (especially radio button) behavior.
Does anyone have experience with building UIs for environments in which there is significant latency between action and response? I would appreciate any and all input.
I would not go for a log: your main focus is the widgets. There are several techniques for reporting status for different components; I will discuss a simple one here:
A button has a status icon next to it showing its status. Use different colors to denote the latency. Green means "ready". When the user clicks the button, the icon changes to orange, which indicates busy. When the user clicks again, the color changes to red, which means queued. When the queue is empty, the color changes back to orange, and once the action has executed, the color changes back to green.
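As a rough sketch of those transitions in code (plain JavaScript, purely illustrative: the status names are mine, and wiring them to an actual icon is left to whatever toolkit you use):

// Status values: "ready" (green), "busy" (orange), "queued" (red).
function nextStatusOnClick(status) {
  if (status === "ready") return "busy";   // command sent, awaiting execution
  if (status === "busy") return "queued";  // clicked again while busy: command is queued
  return status;                           // already queued: further clicks change nothing
}

function nextStatusOnQueueEmpty(status) {
  return status === "queued" ? "busy" : status; // queue drained, still executing
}

function nextStatusOnExecuted() {
  return "ready";                          // action executed: back to green
}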
A slider can be used for floating-point values: use two "sliders". The first is more visible and can be dragged. The second, a layer below the first, shows the "actual reported value", which makes the latency visible.
Textual input can also use the green/orange status icons. While editing, the color changes to orange. If your queue/networking protocol supports canceling editing actions, you can resend the new string every time the user presses a key. If not: change the icon to orange upon a change, send the string, and wait for a status report. The status report should contain the actual value; if this actual value is equal to the value in the component, change the icon to green. If the actual value is not equal to the value in the component, resend the value in the component.
Radio buttons/checkboxes should have a double display: one editable, one uneditable. The first is for user input, and the second shows the actual reported status. Same behavior as the slider component.
These require custom components, or widgets, to be made. You can extend original components or recreate them from scratch.
If your robot can also be "steered":
Create a rectangle that can be dragged on. The rectangle has a small cross painted on it at the current value. While you drag, you see the latency of the cross. You can use interpolation and time values to smooth out robot control. The user will notice the lag because the cross "follows" the mouse pointer. (This is often used in space shooter simulation games when controlling virtual ships, Allegiance for instance.)
It sounds like you have the following information that needs to be communicated to the user:
Current state or value of an attribute of the robot.
Target (i.e., received) value the robot is seeking.
Commanded value sent by the user to the robot.
Status of each command (pending, received, achieved, timed-out).
There are also a couple of other considerations:
Continuous or discrete feedback. Do you have continuous real-time feedback of the current value from the robot? Or is it discrete feedback, where the robot sends the current value only after achieving a target? Obviously continuous is better for the user, since it allows the user to distinguish between the robot being slow and being stuck, but if you don't have it, you have to find a way to live without it.
Synchronous or asynchronous command-sending. If the commands are sent synchronously (i.e., no new one is sent out until the last one is known to be received), then the user may need a means to (a) force out the next command without waiting for a reply from the previous (in case the reply was lost), and (b) cancel a queued command in case conditions change between when a command was created and sent.
Robot with or without conflict resolution. Does the robot have the logic to look ahead in the list of commands it has received and resolve conflicting commands? For example, if it's at a stop and its received-commands queue includes a command to go 5 m/s followed by a command to go 2 m/s, is it smart enough to delete the 5 m/s command? Or will it first attempt to accelerate to 5 m/s then 2 m/s, possibly resulting in an overshoot? Will it wait until it achieves 5 m/s before it even "looks at" the 2 m/s command? A lack of conflict resolution complicates your UI because the users may have to track all commands sent to understand why the robot is behaving the way it is.
Integrated Information with Position-coding Controls
Let's assume that the robot has conflict resolution and asynchronous command-sending. Rather than have separate controls for commanded, targeted and current values, I recommend you integrate them all in a single control to make it easy for the user to compare current, target, and commanded values, and see discrepancies. Perhaps the best way to do this is by representing values by positions in the window. Such position-coding of values is unmatched in showing the relations among things. There are two standard GUI controls, radio buttons and sliders, that accomplish such position coding. However, you'll have to augment them for your purposes. The best usability could require custom-made position-coding controls where you schematically represent the robot and maybe the environment, and allow the user to control it through direct manipulation. However, I'm assuming you need a simple-to-develop implementation, and a well-laid-out combination of sliders and radio buttons may get you pretty close to this ideal.
Use radio buttons for setting a categorical value and a slider for setting a numeric value. The slider may include a text box to allow the user to fine-tune the value. You augment each of these controls so that they show the commanded, target, and current values at the same time. The "usual" indicator (the dot for the radio button and the handle for the slider) represents the commanded value, while separate graphic pointers indicate the current and target values. Discriminate the current from the target by making the current more prominent. I'd design them so that they merge into a single pointer when they are at the same value, in order to minimize clutter for the usual state of things. If your users are untrained on the system, you may want to include text labels on the pointers ("current," and "target" when the target is different from the current).
Using these position-coding controls makes it easy for the user to compare current, target, and commanded values and see discrepancies. The status is implicit in the relative positions of the indicators. When the target pointer moves to the commanded position, the user knows the command was received. When the current pointer is at the target, the robot has achieved the commanded value. This is especially good for continuous feedback of numeric values because the users can not only see the difference between the current and target value on a slider, they can estimate how long it will take to achieve the target by seeing how fast the pointer is closing on it. For the radio buttons, you can include "X% Done" text by the target pointer to indicate when the target will be achieved (if this information is available).
For the most responsive UI, changing a value of a slider or radio buttons should send an immediate command. There is no “Apply” button. Users can re-send a commanded value at any time by re-clicking on the appropriate slider position or radio button. I think you'll find this is a natural human tendency anyway when confronted with an apparently unresponsive control (consider elevator buttons).
The discrepancy between the target pointer and the commanded indicator may be too subtle to signal a lack of reception of a command if responses are commonly slow (over a few seconds, such that user attention has likely shifted elsewhere). If that is the case, you may want to include a modeless alert after a time-out period that almost certainly indicates the command was lost in transmission. A modeless alert may include a text annunciator beside the control and/or graphically highlighting the commanded-target discrepancy. Depending on criticality, you may want to use an audible alert like a beep, or animation, to speed capturing user attention. The modeless alert disappears automatically when the target value matches the commanded value for whatever reason.
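To make the position-coding idea concrete, here is a rough sketch in React-style JavaScript (purely illustrative; the component, prop and class names are mine, and nothing above assumes a particular toolkit):

import React from 'react';

// The draggable handle represents the commanded value; target and current are
// read-only pointers positioned along the same track so discrepancies are visible.
function SpeedControl({ min, max, commanded, target, current, onCommand }) {
  const pct = v => `${((v - min) / (max - min)) * 100}%`;
  return (
    <div className="speed-control">
      {/* changing the handle sends a command immediately -- there is no Apply button */}
      <input
        type="range"
        min={min}
        max={max}
        value={commanded}
        onChange={e => onCommand(Number(e.target.value))}
      />
      {/* merge or label these pointers as described above when they coincide */}
      <div className="pointer target" style={{ left: pct(target) }}>target</div>
      <div className="pointer current" style={{ left: pct(current) }}>current</div>
    </div>
  );
}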
Separate Controls for Commanded and Current
If sliders and radio buttons take too much space for your purposes (or have other issues), you can go with separate non-position-coding controls for commanded and current values, as implied by your Disable Buttons design. However, overall, this is a more challenging design with more issues to resolve.
I would favor field controls like text boxes, check boxes, and dropdown menus, rather than command buttons so that the commanded value is clearly shown. Continuous numeric attributes may include spinner buttons with the text box if it doesn’t end up clogging the queue with incremental commands. As with the above option, changing a value sends an immediate command.
You're right to be concerned about using a timer and disabling buttons. In addition to the problem of making the system sluggish, it means you gray out the commanded state. That can make it hard to read, and it also requires some mental gymnastics by the user to interpret ("it's unavailable, so that means I already selected it"). The interpretation can also be ambiguous because disabled often means Not Applicable (e.g., the Speed control is disabled because the robot has lowered stabilizers for fixed-base operation).
The solution is to use some other graphic feedback than disabling. I’d stay away from color coding. Color coding tends to be arbitrary and thus confusing (e.g., does red mean queued or timed-out?). This may be one of the (rare) good places to use animation since animation is intuitive for representing an on-going process. A flashing or throbbing border (or other feature) for the commanded-value control can indicate a sent command is awaiting reply. A flashing/throbbing border for the current-value control indicates the command is received and the robot is seeking the target value. If animation would be distracting in this situation (like it is for most other situations), then consider a dashed border (versus solid) to indicate awaiting reply or seeking target; dashed suggests a tentative or transitory state.
The target value and status are implied by what is animated. If the commanded-value border is animated, the value inside is pending – the reply is yet to be received. If the current border alone is animated, the value inside the commanded control is the target value. If both borders are animated, then the robot is seeking one (unspecified) target value while another is pending. If you think it's problematic to leave the target unspecified in such circumstances, then you may need three controls to discriminate commanded, target, and current. However, if this is an edge case, it may be better to display the target value on mouse-over of the current-value control or with a drop-down button.
If feedback is continuous, you can also update the current value of numeric attributes at regular increments (about every 200 to 500 ms) so this animation provides an additional feedback of the robot seeking the target. For any attribute, if it takes 10-15 seconds or more for the robot to reach common targets and the robot has conflict resolution, you may want to also display a progress bar within or beside the current attribute control so the user can judge when the robot will achieve the target value.
To re-send the command, users can always re-select a value, or hit Enter while focus is on the commanded control. That's a little odd and awkward for some controls (e.g., checkboxes), so I'd also consider a modeless notification (not a popup) that appears near the control if the command times out without a reply. The notification includes a button to resend the command.
If your users are untrained on the system you may want to include redundant text under the animation (e.g., “Sending” when waiting for a reply and “Seeking” when moving towards the target value).
Log Table
The logging approach is probably best if commands are sent synchronously and/or the robot lacks conflict resolution. This way the user can track the command queue for either sending commands or the commands received by the robot in order to predict robot behavior. However, I wouldn’t make it a read-only text box, but rather a table that can be manipulated. While the table is sorted by default by timestamp, there will be separate sortable fields for the attribute, the commanded value, the status (pending, seeking, achieved, timeout). If feedback is continuous, then the status should indicate progress towards achieving the target value (e.g., percent, or a progress bar).
If there is synchronous command-sending, then users can edit the commanded values of pending commands, or force forward, move, or delete pending commands. In any case, commands can be copied and re-inserted in order to resend any command at any time. Maybe even provide a means to save selected commands and retrieve and insert them later – now you have a macro facility.
If the robot tends to be especially cranky (frequent loss of communications, slow responses), then you may want to have this log table beside the controls for creating commands and viewing the current values. The controls should be set up to make the creation of a discrete command clear to the user. With a cranky computer-to-robot interface, spurious commands are costly, so each command should be well-planned and deliberate. Likely this means a set of field controls like text boxes and drop-down lists to set the values of various attributes, and a button that generates the command(s) for those values. Awkward, yes, but that's an accurate representation of the communication link with the robot.
Alternatively, if typically the queues are nearly empty, then you may want to make this table available in a separate window for experts to troubleshoot problems with robot behavior. Normally then the users use one of the other two options I gave above.
Maybe you can use a variation of the command pattern. Each action by the user generates a command, which goes into a queue. The queue is visible to the user as a printout on the screen. So you do not disable the button; you allow the user to press it multiple times, but show the user that each command is queued. At the same time, you do not update the labels showing the current state of the robot until you receive that state from the robot.
In the queue, each command could show its status somehow, maybe with text and colour. And maybe you should allow the user to delete a command before it is processed by the robot (if that is possible).
So the queue might look like this:
Command    Status     Result of Action
speed+5    pending    speed will increase to 200    (Delete This)
speed+5    pending    speed will increase to 205    (Delete This)
speed-5    pending    speed will decrease to 200    (Delete This)
and so on.
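As a rough sketch of that idea in code (plain JavaScript; the class, method names and the sendToRobot hook are placeholders, not a prescribed design):

// Each user action becomes a command object in a queue that the UI renders on screen.
class CommandQueue {
  constructor(sendToRobot) {
    this.sendToRobot = sendToRobot; // function that actually transmits a command to the robot
    this.commands = [];             // visible to the user as the on-screen printout
  }

  add(name, expectedResult) {
    const command = { name, expectedResult, status: "pending" };
    this.commands.push(command);
    this.sendToRobot(command);
    return command;
  }

  // called when the robot reports that a command was received/processed
  markProcessed(command) {
    command.status = "processed";
  }

  // let the user delete a command that the robot has not processed yet
  remove(command) {
    if (command.status === "pending") {
      this.commands = this.commands.filter(c => c !== command);
    }
  }
}

// usage sketch
const queue = new CommandQueue(cmd => console.log("sending", cmd.name));
queue.add("speed+5", "speed will increase to 200");
queue.add("speed+5", "speed will increase to 205");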