Multitasking workflow on Mac (AppleScript): is it possible?

I am a computer science student at a university, and my workflow usually involves many open windows. I know macOS offers horizontal scrolling between desktops (Spaces); is it also possible to do a kind of vertical scrolling between the windows of the same desktop, so that a keyboard or trackpad gesture switches from one window to the next among those on the current desktop, as if they formed a circular queue?
https://github.com/diegoalfarog/WindowQueueMac
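For a sense of what such a tool has to work with, here is a minimal C sketch (an illustration only, not necessarily how the linked repo does it) that lists the on-screen windows front to back with CGWindowListCopyWindowInfo. Actually focusing the next window in the queue would additionally require the Accessibility (AXUIElement) API, which is omitted here.

```c
/* Minimal sketch, assuming macOS with the CoreGraphics framework
 * (cc queue.c -framework CoreGraphics -framework CoreFoundation;
 * the file name is just an example). Lists on-screen windows front
 * to back -- the "circular queue" a window switcher would rotate. */
#include <CoreGraphics/CoreGraphics.h>
#include <stdio.h>

int main(void) {
    CFArrayRef windows = CGWindowListCopyWindowInfo(
        kCGWindowListOptionOnScreenOnly | kCGWindowListExcludeDesktopElements,
        kCGNullWindowID);
    for (CFIndex i = 0; i < CFArrayGetCount(windows); i++) {
        CFDictionaryRef info = (CFDictionaryRef)CFArrayGetValueAtIndex(windows, i);
        CFStringRef owner = (CFStringRef)CFDictionaryGetValue(info, kCGWindowOwnerName);
        char name[256];
        if (owner && CFStringGetCString(owner, name, sizeof name, kCFStringEncodingUTF8))
            printf("%2ld: %s\n", (long)i, name);  /* owning app of each window */
    }
    CFRelease(windows);
    return 0;
}
```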

Related

Is it possible to detect the Windows on-screen keyboard?

Context: our desktop application is used predominantly by users with significant access issues, and some of them therefore use the Windows on-screen keyboard (OSK). One piece of feedback we've had is that the software is awkward to use with the OSK because too many important parts of the UI get covered up. If we could detect the presence of the keyboard, we could adapt the UI to some extent.
So, is it possible in code to detect that the built-in Windows OSK is open? And is it possible in code to detect the location of the keyboard?
The application is in WinForms and usually runs on full desktops/laptops, not smaller touchscreen machines.
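One common (if undocumented) approach, sketched below in C: the classic osk.exe creates a top-level window whose class name is reportedly "OSKMainClass", so FindWindow plus GetWindowRect answers both questions. The class name is an observation, not a documented contract, so verify it (e.g. with Spy++) on the Windows versions you target; from WinForms the same two calls can be made via P/Invoke.

```c
/* Hedged sketch: detect the classic Windows On-Screen Keyboard (osk.exe).
 * Assumption: its top-level window class is "OSKMainClass" (undocumented,
 * observed rather than guaranteed). */
#include <windows.h>
#include <stdio.h>

int main(void) {
    HWND osk = FindWindowW(L"OSKMainClass", NULL);
    if (osk && IsWindowVisible(osk)) {
        RECT rc;
        GetWindowRect(osk, &rc);   /* screen location, so the UI can move out of the way */
        printf("OSK visible at (%ld,%ld)-(%ld,%ld)\n",
               rc.left, rc.top, rc.right, rc.bottom);
    } else {
        printf("OSK not detected\n");
    }
    return 0;
}
```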

Is a "monitor" in Gtk3 the same as a "Screen" in X11?

I wrote an application in Gtk3 that uses gdk_display_get_monitor_at_window and gdk_monitor_get_geometry. Out of the 75 GTK functions I used, these are the only two causing problems for people trying to compile on Ubuntu, because most people are not running a version of Ubuntu that ships GTK 3.22.
My application also uses X11, so I want to replace these functions with X11 equivalents. After briefly looking at X11, I have some questions:
1. Is a Gtk "monitor" equal to an X11 "screen"?
2. If the answer to 1 is yes, then what is a Gtk "screen" equal to in X11?
3. What is a "display" in each?
A complete table comparing display/screen/monitor etc in Gtk to X11 would be good.
A monitor is a physical device. A screen is a logical device, possibly complete with its own keyboard and pointer (mouse). A screen can span multiple monitors.
Normally there is only one screen (one keyboard, one mouse) on a personal computer, even if there are multiple monitors. Multiple screens are of limited utility for a PC as it is not possible to move windows between screens. A multi-screen setup works best for a multi-user machine where each user gets his own monitor, keyboard and mouse.
There is another variant of multi-screen setup where one can move the mouse pointer between screens (so there is one mouse and one keyboard), but windows are still confined to their screens. This variant is thoroughly obsolete.
A display is a network server that can manage one or more screens (on a typical PC, just one screen).
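To make the mapping concrete, here is a hedged C sketch of the closest X11 replacement for the two GDK calls: with the XRandR extension (RandR 1.5 or newer is assumed; link with -lX11 -lXrandr), XRRGetMonitors enumerates the physical monitors inside one screen, which is roughly what GdkMonitor models.

```c
/* Hedged sketch: per-monitor geometry via XRandR instead of GDK 3.22.
 * Assumes RandR >= 1.5 (XRRGetMonitors); build with -lX11 -lXrandr. */
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);     /* the X11 "display": a server connection */
    if (!dpy) return 1;
    Window root = DefaultRootWindow(dpy);  /* root window of the default "screen" */

    int nmonitors = 0;
    /* Physical monitors within one screen -- roughly GDK's list of GdkMonitor. */
    XRRMonitorInfo *mons = XRRGetMonitors(dpy, root, True, &nmonitors);
    for (int i = 0; i < nmonitors; i++)
        printf("monitor %d: %dx%d at (%d,%d)\n",
               i, mons[i].width, mons[i].height, mons[i].x, mons[i].y);

    XRRFreeMonitors(mons);
    XCloseDisplay(dpy);
    return 0;
}
```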

Touch screen hide cursor

I have an existing Windows XP based application that has two screens (and currently two PCs, one hosting each screen). One is a touch screen and the other a normal mouse-driven screen. The touch screen is used for quick user actions, e.g. touch for an action to be triggered. The application uses the mouse pointer within the non-touch application window to determine where to carry out the action. I want to get rid of the PC hosting the touch screen and just have a touch screen hosted on one PC (dual screen). However, if I do this, is there any way to stop the cursor moving to the touch screen? I don't think I have focus issues, because I can use WS_EX_NOACTIVATE within the touch screen application (the touch screen application only has to respond to touch events).
I have seen some internet posts saying that the cursor can be hidden via the touch panel configuration (if supported), but does anyone know whether there is Windows OS support for this? I have the freedom to move to Windows 7 if that provides the answer. I also don't particularly want to capture the events at the device level (before they reach the OS).
Windows XP doesn't have native support for touch screens (because at the time it was written there were almost no touch devices), so the touch events that come from a touch screen are treated as mouse events on Windows XP. I don't think there is any way to distinguish between a touch and a mouse click in Windows XP (at the application level after the input reaches the OS, rather than at the device level before it does).
Windows 7 on the other hand introduced real support for touch. Whenever a touch event happens, you get a WM_TOUCH message which is very easy to use and, of course, has nothing to do with the mouse.
In conclusion, I think you should upgrade to Windows 7, as it has far better support for touch input. If you decide to go with the Win7 WM_TOUCH route, here's another article that should be helpful to you.
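For reference, a minimal C sketch of the WM_TOUCH plumbing described above; the window boilerplate is condensed, and the 16-contact cap is an arbitrary choice for the example.

```c
/* Minimal sketch (Windows 7+): opt in to WM_TOUCH so touches stop
 * arriving as emulated mouse events. */
#define _WIN32_WINNT 0x0601
#include <windows.h>

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    switch (msg) {
    case WM_TOUCH: {
        UINT n = LOWORD(wp);               /* number of touch contacts */
        TOUCHINPUT ti[16];                 /* 16-contact cap: arbitrary for the demo */
        if (n > 16) n = 16;
        if (GetTouchInputInfo((HTOUCHINPUT)lp, n, ti, sizeof(TOUCHINPUT))) {
            /* ti[i].x and ti[i].y are screen coordinates in 1/100 of a pixel */
            CloseTouchInputHandle((HTOUCHINPUT)lp);
        }
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI wWinMain(HINSTANCE hi, HINSTANCE prev, PWSTR cmd, int show) {
    WNDCLASSW wc = {0};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = hi;
    wc.lpszClassName = L"TouchDemo";
    RegisterClassW(&wc);
    HWND hwnd = CreateWindowW(L"TouchDemo", L"Touch demo", WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                              NULL, NULL, hi, NULL);
    RegisterTouchWindow(hwnd, 0);          /* request WM_TOUCH instead of mouse emulation */
    ShowWindow(hwnd, show);
    MSG m;
    while (GetMessage(&m, NULL, 0, 0)) DispatchMessage(&m);
    return 0;
}
```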

Mac OS X vs. Windows mouse wheel scrolling

Mac OS X determines which area to scroll by the mouse position; Windows does it by which application is active.
Or so I thought. If Notepad++ is the active application in Windows, I can scroll underlying applications by placing the mouse pointer over them. But it seems to be the only application with that behaviour; Windows Explorer (Win7) doesn't even allow scrolling in the side pane if the pane is not active.
My question is: can this be controlled by developers, and why does Windows behave like this? I am not about to write a Windows application, but as a developer this makes me curious (and annoyed).
There are a number of applications that implement focus-window-under-mouse behaviour, but I like Alt-Drag: http://code.google.com/p/altdrag/
Since the code is open source, you may find something reusable there.
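For the curious, the usual technique behind such utilities is a low-level mouse hook that re-posts wheel messages to the window under the cursor. The C sketch below illustrates the idea under simplifying assumptions (it ignores child-window targeting, key-state flags, and horizontal wheels, and it is not AltDrag's actual code):

```c
/* Hedged sketch: scroll-the-window-under-the-mouse via a WH_MOUSE_LL hook. */
#include <windows.h>

static HHOOK g_hook;

static LRESULT CALLBACK MouseHook(int code, WPARAM wParam, LPARAM lParam) {
    if (code == HC_ACTION && wParam == WM_MOUSEWHEEL) {
        MSLLHOOKSTRUCT *m = (MSLLHOOKSTRUCT *)lParam;
        HWND target = WindowFromPoint(m->pt);   /* window under the cursor */
        if (target && target != GetForegroundWindow()) {
            /* Forward the wheel delta to the hovered window, swallow the original.
             * Key-state flags are left at 0 for simplicity. */
            PostMessage(target, WM_MOUSEWHEEL,
                        MAKEWPARAM(0, HIWORD(m->mouseData)),
                        MAKELPARAM(m->pt.x, m->pt.y));
            return 1;   /* nonzero: block the event from its normal route */
        }
    }
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

int main(void) {
    g_hook = SetWindowsHookEx(WH_MOUSE_LL, MouseHook, GetModuleHandle(NULL), 0);
    MSG msg;                       /* LL hooks need a message loop on this thread */
    while (GetMessage(&msg, NULL, 0, 0)) DispatchMessage(&msg);
    UnhookWindowsHookEx(g_hook);
    return 0;
}
```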

Does Windows 7 treat full-screen applications differently?

I have a hidden process that waits for non-standard hardware button messages and runs an application (with CreateProcess). There is no problem with disturbing the user; it's an action the user approved himself. Everything is fine with the usual layout, with the taskbar shown and multiple captioned and non-captioned windows. But the situation differs between XP and 7 when the current application is full-screen. A full-screen application in this case is a window without borders having exactly the same dimensions as the screen; Windows hides the taskbar for such an application even if it's set to always on top.
In XP it's OK: the taskbar is shown in this case, and the application (for example Calculator) as well, while the full-screen app remains visible in areas other than the launched app's and the taskbar's. But in Windows 7 nothing visual happens; the full-screen app stays on top, and if I switch to the taskbar, the executed application is there. I tried to solve it with SetForegroundWindow, BringWindowToTop, even an AllowSetForegroundWindow(GetCurrentProcessId()) call for a window handle found with CreateProcess-WaitForInputIdle-EnumThreadWindows: no change. So has something changed since XP related to full-screen windows that is officially documented?
Thanks,
Max
I would imagine that, if you have your own hardware device, there is some API for generating "real" user input. Clearly the legacy keyboard and mouse drivers, and now USB HID drivers (many of which are user-mode, I think?), have access to an API to do so.
Synergy+, for example, can generate fake keyboard and mouse events on connected PCs, and the consequence of the faked input is that windows switch activation normally.
So my initial idea is for your user-mode "device" application to synthesize actual keyboard messages; SendInput seems a likely candidate for the API that can "fake" real user input events.
Then, use an API like RegisterHotKey in your "UI" app to respond to the hotkey combination your device app generates.
Now (assuming that SendInput IS generating user input events at the correct level), you should, from within the WM_HOTKEY handler in your UI app, have permission (because everything was "user initiated") to change the foreground window (to yourself).
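A hedged C sketch of that pipeline; the Ctrl+Alt+F9 combination is an arbitrary choice for illustration.

```c
/* Hedged sketch: the "device" process synthesizes a hotkey with SendInput;
 * the UI process registered the same combination with RegisterHotKey and may
 * legitimately take the foreground inside its WM_HOTKEY handler. */
#include <windows.h>

/* Device side: fake a real Ctrl+Alt+F9 press (illustrative combination). */
static void SynthesizeHotkey(void) {
    INPUT in[6] = {0};
    WORD keys[3] = { VK_CONTROL, VK_MENU, VK_F9 };
    for (int i = 0; i < 3; i++) {          /* key-down events */
        in[i].type = INPUT_KEYBOARD;
        in[i].ki.wVk = keys[i];
    }
    for (int i = 0; i < 3; i++) {          /* key-up events, reverse order */
        in[3 + i].type = INPUT_KEYBOARD;
        in[3 + i].ki.wVk = keys[2 - i];
        in[3 + i].ki.dwFlags = KEYEVENTF_KEYUP;
    }
    SendInput(6, in, sizeof(INPUT));
}

int main(void) {
    SynthesizeHotkey();
    return 0;
}

/* UI side, for reference:
 *   RegisterHotKey(hwnd, 1, MOD_CONTROL | MOD_ALT, VK_F9);
 *   case WM_HOTKEY: SetForegroundWindow(hwnd); break;        */
```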
Vista introduced the desktop composition feature. In short, all windows draw to in-memory bitmaps, and the Desktop Window Manager then composes these bitmaps and draws the result on a full-screen Direct3D surface. Full-screen windows do not participate in desktop composition and get to draw directly on the screen (mostly because the majority of full-screen apps are games that need real-time screen updates).
In particular, this means that when a full-screen app is up and running, it covers the DWM-composed image, and the user needs to switch to a DWM-managed window for the DWM to start drawing on top of the full-screen app.
I don't have a good solution for your problem, unfortunately. One way to tackle it would be to add the WS_CAPTION style to your app and then handle WM_NCPAINT/WM_NCCALCSIZE/WM_NCHITTEST yourself. This would let you tell the DWM you are a regular windowed application while visually changing your non-client area so it looks as if you have no title bar. However, this does require a certain amount of additional code and might be more effort than you want to invest; a rough sketch follows.
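A minimal sketch of that trick; the window-creation details are assumptions for illustration, not taken from the question.

```c
/* Hedged sketch: keep WS_CAPTION so the DWM treats the window as a normal
 * app, but collapse the non-client area so it still looks borderless. */
#include <windows.h>

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    switch (msg) {
    case WM_NCCALCSIZE:
        if (wParam)        /* claim the whole window rect as client area, */
            return 0;      /* so no caption or border is ever drawn        */
        break;
    case WM_NCHITTEST:
        return HTCLIENT;   /* treat every point as client area */
    }
    /* With a zero-size NC area there is nothing left for WM_NCPAINT to draw. */
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

/* Create with something like (class registration omitted):
 *   CreateWindowExW(0, cls, L"app", WS_CAPTION | WS_POPUP | WS_VISIBLE,
 *                   0, 0, screenW, screenH, NULL, NULL, hInst, NULL);     */
```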
Another way you can try to solve your problem is to explicitly minimize your full-screen application window when launching the new process. However, you will then have to solve the problem of when to maximize it back again.
Btw, you might find the comments on this post from Raymond Chen interesting.
Windows supports multiple desktops, and my guess would be that the full-screen app is using a different desktop than the default one (where your application will be shown). A desktop object in Windows is "a logical display surface and contains user interface objects such as windows, menus, and hooks". For example, screen savers are normally started on a separate desktop.
You can find out which desktop an application is running on using Process Explorer:
1. Set Process Explorer to replace Task Manager and to run always on top.
2. When your full-screen app is shown, launch Process Explorer by pressing Ctrl + Shift + Esc.
3. Within Process Explorer, select the full-screen process and press Ctrl + H to display the handles of this process.
4. See the value of the Desktop item in the list. Usually this would be set to Default.
If you know which desktop this app is running on, you can start your process on the same desktop by first calling OpenDesktop to get a handle to that desktop (confirming it exists and is accessible) and then passing the desktop name via the STARTUPINFO of your CreateProcess call.
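A hedged C sketch of that last step. Note that STARTUPINFO.lpDesktop takes the desktop's name (a string), while the OpenDesktop handle mainly serves to validate it; the name "Default" below is an assumption, so use whatever Process Explorer reported.

```c
/* Hedged sketch: launch a process on a named desktop. */
#include <windows.h>

static BOOL LaunchOnDesktop(const wchar_t *desktopName, wchar_t *cmdLine) {
    /* Open the target desktop to check that it exists and is accessible. */
    HDESK desk = OpenDesktopW(desktopName, 0, FALSE, DESKTOP_CREATEWINDOW);
    if (!desk) return FALSE;

    STARTUPINFOW si = { sizeof(si) };
    si.lpDesktop = (LPWSTR)desktopName;   /* a name, e.g. L"Default"; the
                                             L"WinSta0\\Default" form also works */
    PROCESS_INFORMATION pi;
    BOOL ok = CreateProcessW(NULL, cmdLine, NULL, NULL, FALSE, 0, NULL, NULL,
                             &si, &pi);
    if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
    CloseDesktop(desk);
    return ok;
}
```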
