When I interact with a SeekBar on an Android Wear device, a red circle with an X button pops up.
There is no exception or force-close dialog. If I push the button, it just terminates the app.
I wonder why the red circle with the X button pops up, and how to avoid it.
Thanks!
The "red circle with x" is a DismissOverlayView, that is,
A view for implementing long-press-to-dismiss in an app.
(as described in the documentation for the Wearable UI Library).
The idea is to provide an alternative way to close an app when you specifically prevent swipe-to-dismiss (by setting android:windowSwipeToDismiss="false"), for example when swiping is an integral part of an app.
A canonical example (as explained in this Google I/O talk, around the 23:00 mark) is displaying a ViewPager on the wearable: you may not want the user to accidentally exit the app by swiping right from the first page, but you still need some way to exit it.
So they settled on long press as the pattern for doing so.
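For completeness, here is a minimal sketch of that pattern, following the Wear documentation (the layout id dismiss_overlay and the string long_press_intro are assumed names):

    import android.app.Activity;
    import android.os.Bundle;
    import android.support.wearable.view.DismissOverlayView;
    import android.view.GestureDetector;
    import android.view.MotionEvent;

    public class WearActivity extends Activity {
        private DismissOverlayView mDismissOverlay;
        private GestureDetector mDetector;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_wear);

            // The overlay that draws the "red circle with X".
            mDismissOverlay = (DismissOverlayView) findViewById(R.id.dismiss_overlay);
            mDismissOverlay.setIntroText(R.string.long_press_intro);
            mDismissOverlay.showIntroIfNecessary();

            // Show the overlay only on long press.
            mDetector = new GestureDetector(this, new GestureDetector.SimpleOnGestureListener() {
                @Override
                public void onLongPress(MotionEvent e) {
                    mDismissOverlay.show();
                }
            });
        }

        @Override
        public boolean onTouchEvent(MotionEvent e) {
            return mDetector.onTouchEvent(e) || super.onTouchEvent(e);
        }
    }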
I am building a mini project in which I want a color-picker feature. I found that the Digital Color Meter app on macOS is helpful. I did some searching and found I can write a simple AppleScript to activate the application.
It would be awesome if I could monitor the next left-click mouse event (even when the Digital Color Meter application is not focused). By monitoring the next left-click, I want to read the values from the R, G, and B fields.
I think a mouse event handler should be able to achieve this (though I don't know how to do it). I am not sure whether I can read the values from the application...
Simply use
choose color
from Standard Scripting Additions.
In the color-picker window there is also an eyedropper (pipette).
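For example, a minimal sketch (choose color returns a list of three 16-bit values, which you can scale down to the familiar 0-255 range):

    -- Open the system color picker; it includes the eyedropper tool
    set {r, g, b} to choose color default color {65535, 0, 0}
    -- Values are 16-bit (0-65535); integer-divide by 257 for 0-255
    set r255 to r div 257
    set g255 to g div 257
    set b255 to b div 257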
When I double-click an animator controller to open it, the Animator tab appears, but when I run the editor I don't get the usual flow, operations, etc. I only get a static view of the states and the transition arrows between them. My parameters do not show the changes they go through, either.
I have multiple animations and can switch between them when certain game conditions occur, but nothing really shows when I do so: the flow of control, what happens, what goes wrong, the switching, the progress bar, and so on.
I have the latest Unity (5.2.0f3), so I wondered whether it is just me or whether others are having a similar problem...
What we need to do is this: once we hit Play in the editor (and have the Animator window docked on one side, of course), we just click, in the Hierarchy, the object whose animation flow we want to analyse. The Animator window will then start showing the states and the progress bar.
Also, after upgrading to Unity 5.2 it is worth checking the values previously set for transition conditions, for example "if vSpeed is greater than 0.1, start walking". All my set values were messed up, i.e. changed.
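For reference, a transition condition like "vSpeed Greater 0.1" compares against a parameter your scripts set at runtime; a minimal sketch (the parameter name and input axis are assumptions):

    using UnityEngine;

    // Hypothetical controller driving the "vSpeed" parameter that the
    // Animator transitions compare against.
    public class WalkController : MonoBehaviour
    {
        private Animator animator;

        void Start()
        {
            animator = GetComponent<Animator>();
        }

        void Update()
        {
            // The transition "vSpeed Greater 0.1" reads this value each frame.
            animator.SetFloat("vSpeed", Input.GetAxis("Vertical"));
        }
    }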
I need a window to 'point' to the icon that was clicked in the Dock, similar to the way a context menu has a little callout arrow pointing at its target. This means I need to get the screen location of the Dock, or more accurately of the DockTile. (Yes, I could use the mouse coordinates, but that doesn't look as good because it 'moves'.)
My thought was to get the associated view (I already have that), then use view-to-screen coordinate conversions, but that is becoming problematic because the x/left and y/top values of the bounding rectangle always read zero. I know that's because there's a nested hierarchy of views. The problem is I've walked that hierarchy and always end up hitting a roadblock.
So thoughts?
Mark
You can get the Dock icon positions using the Accessibility API; there is some excellent sample code and a sample app from Apple here.
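A rough sketch of that approach, assuming your app is trusted for accessibility (error handling is mostly omitted; the Dock's items live under a single AXList child):

    #import <Cocoa/Cocoa.h>
    #import <ApplicationServices/ApplicationServices.h>

    // Walks the Dock's accessibility tree and logs each dock item's
    // title and top-left screen position.
    static void LogDockItemPositions(void) {
        NSRunningApplication *dock = [[NSRunningApplication
            runningApplicationsWithBundleIdentifier:@"com.apple.dock"] firstObject];
        if (!dock) return;

        AXUIElementRef dockApp = AXUIElementCreateApplication(dock.processIdentifier);
        CFArrayRef children = NULL;
        AXUIElementCopyAttributeValue(dockApp, kAXChildrenAttribute,
                                      (CFTypeRef *)&children);
        if (!children || CFArrayGetCount(children) == 0) return;

        // The Dock's first child is the list of dock items.
        AXUIElementRef list = (AXUIElementRef)CFArrayGetValueAtIndex(children, 0);
        CFArrayRef items = NULL;
        AXUIElementCopyAttributeValue(list, kAXChildrenAttribute, (CFTypeRef *)&items);

        for (CFIndex i = 0; items && i < CFArrayGetCount(items); i++) {
            AXUIElementRef item = (AXUIElementRef)CFArrayGetValueAtIndex(items, i);
            CFTypeRef title = NULL, positionValue = NULL;
            AXUIElementCopyAttributeValue(item, kAXTitleAttribute, &title);
            AXUIElementCopyAttributeValue(item, kAXPositionAttribute, &positionValue);

            CGPoint position = CGPointZero;
            if (positionValue) {
                AXValueGetValue((AXValueRef)positionValue, kAXValueCGPointType, &position);
                CFRelease(positionValue);
            }
            NSLog(@"%@ at (%.0f, %.0f)", (__bridge NSString *)title, position.x, position.y);
            if (title) CFRelease(title);
        }
        if (items) CFRelease(items);
        CFRelease(children);
        CFRelease(dockApp);
    }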
I am trying to make a simple application in which there is an empty red rectangle, and whenever the mouse moves over the upper half of the rectangle's border the cursor becomes a closed hand.
I started by choosing the Foundation command-line tool project. I made a transparent NSWindow, embedded an NSView in it with the rectangle, and made the window accept mouse-moved events (via -setAcceptsMouseMovedEvents:). I have overridden -canBecomeKeyWindow and -canBecomeMainWindow to return YES. But somehow none of the -mouseMoved: events are being received by the NSView.
When I put the same code into a Cocoa application project, creating my window in -applicationDidFinishLaunching:, my view was able to receive -mouseMoved: events.
Why is it not receiving mouse-moved events when I use the Foundation command-line tool project?
I have also observed that whenever I make a window (Carbon or Cocoa) from a Foundation command-line tool project, the window doesn't become key even when I click its title bar. On clicking, the title bar stays light grey instead of becoming dark grey. Why is this happening?
I have overridden -canBecomeKeyWindow and -canBecomeMainWindow of NSWindow to return YES.
I would agree with what Joshua has already said. Any application that is going to show a user interface, be it a faceless background process or one which shows up in the Dock, should be in the form of an application bundle, not a plain old Mach-O executable like the one the Foundation tool template creates.
Also, there are reasons why views do not respond to mouseMoved: events by default:
- Mouse-moved events can quickly flood the event queue.
- There is generally little reason to use mouseMoved:, as tracking areas are far more effective and efficient.
A while back, I wrote a little test app that demonstrates the differences between these two approaches: moving the mouse around the upper view for roughly 20 seconds results in about 1,000 events, while the lower view, which uses tracking areas, produces fewer than 50.
Sample GitHub project: https://github.com/NSGod/MouseMoved-vs-TrackingAreas
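To illustrate, here is a minimal sketch of the tracking-area approach, adapted to the original goal of a closed-hand cursor over the view's upper half (the class name is made up):

    #import <Cocoa/Cocoa.h>

    @interface HalfBorderView : NSView
    @end

    @implementation HalfBorderView

    // Called by AppKit whenever the view's geometry changes; rebuild the
    // tracking area so it always covers the current upper half.
    - (void)updateTrackingAreas {
        [super updateTrackingAreas];
        for (NSTrackingArea *area in [self.trackingAreas copy]) {
            [self removeTrackingArea:area];
        }
        NSRect upperHalf = self.bounds;
        upperHalf.origin.y += NSHeight(upperHalf) / 2.0;
        upperHalf.size.height /= 2.0;
        NSTrackingArea *area = [[NSTrackingArea alloc]
            initWithRect:upperHalf
                 options:(NSTrackingCursorUpdate | NSTrackingActiveInKeyWindow)
                   owner:self
                userInfo:nil];
        [self addTrackingArea:area];
    }

    // Fires only inside the tracking rect -- no mouseMoved: flood.
    - (void)cursorUpdate:(NSEvent *)event {
        [[NSCursor closedHandCursor] set];
    }

    @end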
Again, as Joshua mentioned, it would be helpful if you could describe what you're trying to accomplish. If your app needs to be a background app (LSUIElement == 1), and present an interface without appearing in the Dock, then there are ways to do that (as Josh mentioned, a command-line, non-bundled app is not the way).
You have no event loop to detect events and pass them to your window, because your program never starts an NSApplication. See the main.m file of a typical Cocoa application.
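A minimal sketch of what that main.m must do before any window can receive events (a real app would normally be bundled and use NSApplicationMain; the activation-policy line is also what lets an unbundled app's window become key):

    #import <Cocoa/Cocoa.h>

    int main(int argc, const char *argv[]) {
        @autoreleasepool {
            // Create the shared application; without it there is no event loop.
            NSApplication *app = [NSApplication sharedApplication];
            [app setActivationPolicy:NSApplicationActivationPolicyRegular];

            NSWindow *window = [[NSWindow alloc]
                initWithContentRect:NSMakeRect(200.0, 200.0, 300.0, 200.0)
                          styleMask:(NSWindowStyleMaskTitled | NSWindowStyleMaskClosable)
                            backing:NSBackingStoreBuffered
                              defer:NO];
            [window setAcceptsMouseMovedEvents:YES];
            [window makeKeyAndOrderFront:nil];

            [app activateIgnoringOtherApps:YES];
            [app run];  // start dispatching events to windows and views
        }
        return 0;
    }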
It might be helpful to describe what you're trying to accomplish by taking this approach. My guess is you're building a daemon but want a GUI interface to manage the otherwise "headless" daemon. That or you're building a new login management system. In either case, there are specific ways to do both and this isn't it. :-)
Whenever I write mouse-handling code, the onmousedown/onmouseup/onmousemove model always seems to force me to produce unnecessarily complex code that still ends up causing all sorts of UI bugs.
The main problem, which I see even in major pieces of software these days, is the "ghost mouse" event, where you drag to outside the window and then let go. Once you return to the window, the application still thinks you have the mouse down even though the button is up. This is especially annoying when you're trying to highlight something that extends to the border of the screen.
Is there a RIGHT way to write mouse code, or is the entire model just flawed?
Ordinarily, one captures the mouse on mouse-down so that the mouse-move and mouse-up events go through your code regardless of the cursor moving outside your application window.
More recently this has become a problem when running a VM or remote session: it is difficult for apps inside these to track the mouse outside the machine's screen area, which is represented as a window on the host.
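In the DOM, for example, pointer capture gives you exactly this behaviour (a sketch; element stands for whatever node the drag starts in):

    // Capture on button-down: all subsequent pointer events are routed to
    // this element until release, even if the pointer leaves the window.
    element.addEventListener('pointerdown', function (e) {
      element.setPointerCapture(e.pointerId);
    });
    element.addEventListener('pointerup', function (e) {
      element.releasePointerCapture(e.pointerId);
    });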
I'm not sure which environment you're trying to track mouse buttons in, but the best way to handle this is to have a mouse listener that tracks onmouseup 100% of the time once you've detected onmousedown.
That way it doesn't matter in which screen region the user releases the mouse button: the state is reset no matter where it happens.
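A sketch of that pattern in the DOM (element and the drag-state handling are placeholders):

    element.addEventListener('mousedown', function (e) {
      function onMove(e) {
        // ... update the drag/selection from e.clientX / e.clientY ...
      }
      function onUp(e) {
        // Remove the temporary listeners and clear the "mouse is down"
        // state, no matter where the button was released.
        document.removeEventListener('mousemove', onMove);
        document.removeEventListener('mouseup', onUp);
      }
      // Listen on the whole document so the release is seen even when it
      // happens outside the element that started the drag.
      document.addEventListener('mousemove', onMove);
      document.addEventListener('mouseup', onUp);
    });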