Replicating mouse motion using AJAX

I want to implement a feature on my website where, once two users are connected, they share a text editor. If one user types anything into the text editor on his screen, the same text appears on the second user's screen at the same coordinates, and vice versa. There are also pointer-shaped images on both users' screens to represent mouse pointers: when user A moves his mouse, the image on user B's screen should move according to user A's movement, and when user B moves his mouse, the image on user A's screen should move accordingly.
The problem is that I am using a database to store each user's coordinates, and this approach results in a lot of lag and delay. What should I use in place of the database? Please help!

You probably don't want the clients to poll for updates but rather push the updates to them: http://en.wikipedia.org/wiki/Comet_%28programming%29 . This reduces the delay between one client sending an update to the server and the other clients receiving it.

How about using Redis: http://code.google.com/p/redis/
Here is an example of a similar collaborative text editor built with it: http://www.web2media.net/laktek/2010/05/25/real-time-collaborative-editing-with-websockets-node-js-redis/

Related

Is there a way to update UISnapBehavior anchor point?

I'm building a custom container view that is a stack of cards piling on top of each other (each card is a child view controller).
There is a gesture where the user can "pick up" a card, and I am currently implementing it like this:
On the first touch-down-inside, I add a UISnapBehavior. The card "snaps" onto the user's finger over a short duration.
After a small delay, the UISnapBehavior is removed and a UIAttachmentBehavior is put in its place (so I can update the anchor point as the user moves his finger around).
This presents two problems:
If the user moves his/her finger while the UISnapBehavior is active, I remove the snap behavior and replace it with the UIAttachmentBehavior, but doing so causes a visible glitch where the card bounces a bit before attaching to the finger. It only lasts a moment, but it is still very noticeable.
If I use a UIAttachmentBehavior directly, I cannot configure it to work the same way the snap behavior does: there is either too much spring in the animation or too little snap feel.
What I want is the following:
When the user first touches the card, it should snap onto the finger (there is a designated anchor point on the card where the finger should go, so that point snaps to the finger when the card is touched).
When the user moves the finger, the card should move along with it.
When the user releases the finger (touch-up), there should be a small residual acceleration; this is already done using a UIDynamicItemBehavior.
So is there a way to get a UISnapBehavior-style attachment (to the finger) that still allows the anchor to move around?
Or do I need to roll my own custom behavior for this?

Multiple cursors in a Windows application

I have found some resources, like this one, dealing with the subject of getting multiple cursors on Windows when more than one mouse is attached to the system. My requirement is a little simpler, but I need some input on it.
1) What I want is to launch an application (let's say IE) and perform mouse activity (hovering, clicking, etc.) on it, all without disturbing the system cursor, which should remain free for the user of the desktop.
2) I understand that this cannot be done using the Windows cursor APIs, as the documentation always refers to "the cursor" and there is no built-in concept of multiple cursors.
3) This leaves me with drawing a cursor on the target window, a bitmap perhaps, and moving it around as needed. Which APIs would be of use here? How do I simulate the visual effects that actual cursor movement produces? Do I need to send messages such as WM_MOUSEMOVE and WM_SETCURSOR to the target window?
Does sending mouse messages to the other window interfere with the mouse activity the user is currently engaged in? The intention is not to disturb the user while the application runs.
Thanks for your input.
Create a separate window for each extra mouse that is installed and put an image of the desired cursor in each window. Then use the Raw Input API to monitor activity on the extra mice and move your windows around as needed, and use mouse_event() or SendInput() to send out your own mouse events (movements, clicks, etc.) as needed.
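To make the Raw Input part of that concrete, here is a minimal C++ sketch. It assumes you have already created one small cursor window per extra mouse; MoveCursorWindowFor() is a hypothetical helper of your own, not a Windows API.

#include <windows.h>
#include <map>

static std::map<HANDLE, POINT> g_cursorPos;     // one software cursor position per physical mouse

// Call once (e.g. in WM_CREATE) so the window starts receiving WM_INPUT for every attached mouse.
void RegisterForRawMouseInput(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;             // HID_USAGE_PAGE_GENERIC
    rid.usUsage     = 0x02;             // HID_USAGE_GENERIC_MOUSE
    rid.dwFlags     = RIDEV_INPUTSINK;  // deliver input even when the window is not in the foreground
    rid.hwndTarget  = hwnd;
    RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

// Handle WM_INPUT in the window procedure to learn which mouse moved and by how much.
void OnRawMouseInput(LPARAM lParam)
{
    UINT size = 0;
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, NULL, &size, sizeof(RAWINPUTHEADER));
    BYTE* buffer = new BYTE[size];
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, buffer, &size, sizeof(RAWINPUTHEADER)) == size)
    {
        RAWINPUT* ri = (RAWINPUT*)buffer;
        if (ri->header.dwType == RIM_TYPEMOUSE && !(ri->data.mouse.usFlags & MOUSE_MOVE_ABSOLUTE))
        {
            POINT& pos = g_cursorPos[ri->header.hDevice];    // keyed by the physical device handle
            pos.x += ri->data.mouse.lLastX;                  // relative movement deltas
            pos.y += ri->data.mouse.lLastY;
            // MoveCursorWindowFor(ri->header.hDevice, pos); // hypothetical: reposition that mouse's cursor window
        }
    }
    delete[] buffer;
}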

How to track mouse movements without limiting it to screen size?

I'm using WM_MOUSEMOVE to get changes in the mouse position. When simulating "knobs", for example, it's desirable to let the user drag up/down with the mouse without any limits. In these cases I hide the cursor, use SetCursorPos to reset its position every time the user moves it, and track just the difference from the original position.
Unfortunately, this doesn't seem to work. When I set the mouse position it sometimes works, but sometimes the cursor ends up one or more pixels away, which is just wrong. Even worse, after the call another WM_MOUSEMOVE seems to be delivered, which does the same thing again because it tries to move the cursor back to the original position. So it ends up in an infinite cycle of setting the mouse position and receiving messages until the user releases the mouse button.
What's the correct approach or what's the problem?
The raw input system can do this - it lets you register for raw mouse input that isn't clipped or confined to the screen boundaries.
Broadly speaking, you register for raw input using RegisterRawInputDevices(). Your window will then receive WM_INPUT messages, which you process using the GetRawInputData() function.
See Using Raw Input for an example.
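A rough sketch of that flow for the knob case, assuming this code lives in the knob window's setup code and window procedure; UpdateKnob() is a hypothetical placeholder for applying the delta.

#include <windows.h>

// Setup: ask Windows to deliver raw mouse input to this window as WM_INPUT.
void StartRawMouseTracking(HWND hwnd)
{
    RAWINPUTDEVICE rid = { 0x01, 0x02, 0, hwnd };   // generic desktop page, mouse usage
    RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

// Window procedure: WM_INPUT carries raw deltas that are never clipped to the screen edges.
LRESULT OnInput(HWND hwnd, WPARAM wParam, LPARAM lParam)
{
    RAWINPUT ri;
    UINT size = sizeof(ri);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &ri, &size, sizeof(RAWINPUTHEADER)) != (UINT)-1 &&
        ri.header.dwType == RIM_TYPEMOUSE &&
        !(ri.data.mouse.usFlags & MOUSE_MOVE_ABSOLUTE))
    {
        // UpdateKnob(ri.data.mouse.lLastY);   // hypothetical: turn the knob by the raw vertical delta
    }
    return DefWindowProc(hwnd, WM_INPUT, wParam, lParam);   // required cleanup for WM_INPUT
}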
I hide the cursor and use SetCursorPos to change its position every time the user moves it, and detect just the difference from the original position.
This is just plain wrong. Instead, use SetCapture() to capture the mouse. All movements will be reported as WM_MOUSEMOVE messages with coordinates that are relative to the specified window, even if the mouse is outside of that window, until you release the capture.
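A minimal sketch of the capture approach inside the knob control's window procedure; SetCapture(), ReleaseCapture() and the GET_X_LPARAM/GET_Y_LPARAM macros are standard Win32, while AdjustKnob() is just a hypothetical placeholder.

#include <windows.h>
#include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM decode possibly negative coordinates correctly

static POINT g_dragStart;   // client coordinates where the drag began

LRESULT CALLBACK KnobProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_LBUTTONDOWN:
        g_dragStart.x = GET_X_LPARAM(lParam);
        g_dragStart.y = GET_Y_LPARAM(lParam);
        SetCapture(hwnd);   // from now on WM_MOUSEMOVE arrives even when the cursor leaves the window
        return 0;

    case WM_MOUSEMOVE:
        if (GetCapture() == hwnd)
        {
            int dy = GET_Y_LPARAM(lParam) - g_dragStart.y;   // may be negative or beyond the window edge
            (void)dy;       // AdjustKnob(dy);  hypothetical: apply the vertical delta to the knob value
        }
        return 0;

    case WM_LBUTTONUP:
        ReleaseCapture();
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}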
Asking the user to keep moving the mouse even after the cursor has hit the screen limit is a very bad idea in terms of user interface, IMHO.
Some games take another approach: when the mouse hits the "limit", the game enters a special mode in which things appear to function exactly as if the mouse were still moving, even though the user isn't moving it. When the user wants to exit that mode, he just has to move the mouse off the limit.
Doing so requires a timer, armed when the mouse hits the limit, that periodically executes code as if the mouse were moving. The timer is stopped when a real mouse movement takes the cursor off the limit.
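A rough sketch of that timer idea using the standard SetTimer()/KillTimer() APIs; the edge test and Scroll() are placeholders for whatever the application would actually do.

#include <windows.h>

const UINT_PTR EDGE_TIMER_ID = 1;

// Called from the WM_MOUSEMOVE handler: arm the timer while the cursor sits at a screen edge.
void CheckEdge(HWND hwnd, int x, int screenWidth)
{
    bool atEdge = (x <= 0 || x >= screenWidth - 1);   // placeholder edge test (left/right edges only)
    if (atEdge)
        SetTimer(hwnd, EDGE_TIMER_ID, 30, NULL);      // fire roughly every 30 ms while at the edge
    else
        KillTimer(hwnd, EDGE_TIMER_ID);               // a real movement away from the edge stops the mode
}

// Called from the WM_TIMER handler: keep acting as if the mouse were still moving.
void OnEdgeTimer(WPARAM timerId)
{
    if (timerId == EDGE_TIMER_ID)
    {
        // Scroll(lastDirection);   // hypothetical: repeat the effect of the last real movement
    }
}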
OK folks, I found a simple enough solution.
The main problem is that SetCursorPos may not set the coordinates exactly; I guess it's because of some high-resolution processing, but it's probably a bug nonetheless. In any case, when SetCursorPos doesn't set the coordinates exactly (but ±1 in x and/or y), it also sends a WM_MOUSEMOVE to the target window. As a result the window performs the exact same operation as before, and this goes on and on.
So the solution is to remove all WM_MOUSEMOVE messages right after SetCursorPos:
MSG msg;
// Discard any WM_MOUSEMOVE that the SetCursorPos call may have generated.
while (::PeekMessage(&msg, NULL, WM_MOUSEMOVE, WM_MOUSEMOVE, PM_REMOVE)) {}
Then retrieve the current mouse position using ::GetCursorPos().
It's ugly, but it seems to fix the problem. It appears that for some mouse positions the system always adds or subtracts 1 from either coordinate, so this way you let the system do its weird stuff and use the new coordinates, instead of trying to persuade the system that your coordinates are the correct ones. :)

When reopening an application, how do I restore the zoomed state without causing problems?

If the user zooms our app's main window, closes the app, then reopens it, we want to restore the last state of the main window, so we zoom it before showing the window. The app opens zoomed, so everything is great.
However, if the user clicks the zoom button again, nothing happens. I believe this is because the documentation for zoom: says for step 5:
Determines a new frame. If the window is currently in the standard state, the new frame represents the user state, saved during a previous zoom. If the window is currently in the user state, the new frame represents the standard state, computed in step 1 above. If there is no saved user state because there has been no previous zoom, the size and location of the window do not change.
I think it doesn't unzoom because there's no saved user state, but I'm not sure why. Shouldn't the user state be "the size the window was at before it was zoomed"? If not, how can I make sure the user state is set properly so that the window un-zooms when the user clicks the zoom button again?
Edit: MrGomez responds below that this is basically how it's meant to work. This doesn't seem to be how other apps behave, though. Try Safari - Zoom it, then quit, then reopen the window, and it restores at the zoomed size. Click the zoom button and it goes back to a smaller size. iCal is the same. How are those apps doing it?
The trouble is, this is the expected behavior. As you mentioned here:
we want to restore the last state of the main window, so we zoom it before showing the window
You've gone ahead and set the user's view as your standard view. The original default view has not been preserved.
This thread goes into detail about the expected functionality of zoom:. Quoting the accepted answer:
According to the documentation for the zoom: method (note the :), the inverse of zoom: is zoom::
This action method toggles the size and location of the window between its standard state (provided by the application as the "best" size to display the window's data) and its user state (a new size and location the user may have set by moving or resizing the window).
If it's in the user state (not zoomed), it'll change to the standard state (zoom), and if it's in the standard state (zoomed), it'll change to the user state (unzoom).
The documentation also notes:
If there is no saved user state because there has been no previous zoom, the size and location of the window do not change.
This is what will happen if you started the window out in its standard state; since it was never in any other state, there is nothing for it to unzoom back to.
The trouble is that you've overridden the standard state; zoom:'s purpose is to toggle between the user's view and your interface's standard view. To save a user's window state for later restoration, follow this guide.
And, for reference, here's the remainder of OSX's style guidelines and a very good tutorial for how you should work with this restriction.
Edit: As discussed, this is also a question of UI modality and the fact OSX windows may be bimodal (user state, maximized state) or trimodal (user state, standard state, maximized state) with a loss of the user state when an application is closed in maximized state. This appears to reproduce itself with stock-standard applications for OSX, including, for example, iCal.app.
As a result, the best that can be done here is to define a functioning UI paradigm that's consistent with other OSX applications but carries many of the same idiosyncrasies of the little green blob.

Mango Secondary Tiles

I have some doubts regarding secondary tiles:
1) How many secondary tiles can be pinned to the Start screen for an application? Is there any limit?
2) Secondary tiles are pinned to Start programmatically. Do we have to ask the user whether he wants the changes to a particular section pinned to Start as a secondary tile, or can we pin secondary tiles without getting any confirmation from the user?
There is currently no published restriction on the number of tiles that can be added.
That said, given the screen real estate available on the Start screen, tiles whose updates and animations aren't visible (or at least aren't within the first few "screens" worth of tiles) become decreasingly useful.
While pinning tiles can be done without a user-initiated action, when a new tile is added the app closes and the phone returns to the Start screen to show the new tile. As such, it's not possible to add lots of live tiles without the user being aware.
If there isn't already a Marketplace requirement regarding the creation of additional tiles, I would expect one to be added quite quickly.
The potentially bad user experience of an app which repeatedly creates additional tiles without a specific instruction from the user would, I strongly expect, lead to that app not getting used much. ;)
In summary:
You can add as many secondary tiles as you wish but only do it when the user requests it and only make the option available to the user when it will add real value.
