Way to use something like PyAutoGUI on Mobile? [closed] - user-interface

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 1 year ago.
I'm currently using Qpython and Sl4A to run python scripts on my Droid device.
Does anybody know of a way to use something like PyAutoGUI on mobile to automate tapping sequences (which would be mouse clicks on a desktop or laptop)?
I feel like it wouldn't be too hard, but I'm not quite sure how to get the coordinates for positions on the mobile device.

Something like PyAutoGUI for Android is AutoInput. It's a plug-in for the app Tasker. If you've heard of Tasker, then you know what I'm talking about.
If not, Tasker is an app built to automate your Android device. If you need specific taps and scrolls, you can install the AutoInput plug-in for Tasker. It lets you tap a precise point on your screen.
Or, if you have a rooted device, you can run shell commands directly from Tasker without any plug-in.
Now, you can easily get the precise x,y coordinate of any point on the screen.
Go to Settings > About Phone and tap on Build Number until it says, "You've unlocked developer options."
Back in the Settings app you will have a new Developer Options entry. Open it, scroll down until you find "Show Pointer Location", and turn it on. Now wherever you touch and hold the screen, the top of the screen shows the x,y coordinate of that spot.
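With coordinates in hand, the rooted shell-command route above boils down to Android's `input tap` shell command. Here is a minimal sketch of driving it over adb from a computer; the function names are illustrative, and it assumes `adb` is on your PATH with USB debugging enabled (on a rooted device, the same `input tap` command can be run locally from a Tasker "Run Shell" action):

```python
import subprocess

def tap_cmd(x, y, serial=None):
    """Build the adb invocation for Android's `input tap` shell command."""
    cmd = ["adb"]
    if serial:
        cmd += ["-s", serial]  # target a specific device/emulator
    return cmd + ["shell", "input", "tap", str(x), str(y)]

def tap(x, y, serial=None):
    """Send the tap over adb (assumes USB debugging is enabled)."""
    subprocess.run(tap_cmd(x, y, serial), check=True)
```

For swipes, the analogous command is `input swipe x1 y1 x2 y2 duration_ms`.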
I hope this helps. Please comment if you have any queries.

Unfortunately, no. PyAutoGUI only runs on Windows, macOS, and Linux.

Related

Building system-wide macOS text editing tools [closed]

Closed 5 years ago.
So, these little popups you get when typing on macOS:
They're unique in that they allow users to trigger an action immediately, rather than from the menu bar or a contextual menu (e.g. right-click). They're common among most OSes in some form; this just happens to be Apple's design.
This popup is
1. System wide, working in browsers, many text editors, etc.
2. Always where your eyes are when typing, at the text cursor/most recent word typed.
As this window is always below the cursor, I figure there's space above to add other typing tools that work in a similar way.
For any typing tool like that it really has to be system-wide, not just in a particular app.
I'm struggling to find useful leads as to which APIs cover this, and whether it's even possible to access this area of the macOS system.
You may be able to program such a feature as an input method, using InputMethodKit. Input methods are most commonly used for Asian language input, where there are many more characters than keys on a keyboard. However, the API is more general than that.
Insofar as the pop-up you're citing as an example is a spell-checker, you may be able to customize it by building a custom spell server. I assume the user will have to select your spelling service explicitly, though, for each document.

In macOS, is it possible to programmatically control the screen magnification feature provided by accessibility options? [closed]

Closed 4 years ago.
I'm visually impaired and I use macOS screen magnification all the time.
It works well.
I scroll the mouse wheel while holding the control key to zoom in and out.
I can see any region of the magnified screen by moving the mouse.
Is there an API to programmatically control these features?
I'd like to be able to automate some gestures, similar to IDE macros.
For instance, I'd like to automatically adjust zoom and focus to new dialog boxes as they show up on screen.
If there isn't an API to directly control magnification, would it be possible to simulate the keystrokes and mouse gestures that activate the zoom features?
There is documentation in the Mac OS X developer library that describes the concept of "Accessibility Programming", where you should be able to find what you need. It covers programming your application so that accessibility clients can interact with it.
The accessibility API provides protocols that define how accessibility
clients interact with your app
You should start from here before trying to implement the magnification control.
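If no public API turns out to control magnification directly, the question's fallback idea of simulating the Control+scroll gesture can be sketched with pyobjc's Quartz bindings. This is a hedged sketch, not a confirmed solution: the function name and the injectable `quartz` parameter are illustrative, and on a real Mac the process also needs Accessibility permission before it may post events.

```python
def control_scroll(lines, quartz=None):
    """Post a scroll-wheel event with the Control modifier held, which
    triggers macOS screen zoom when "Use scroll gesture with modifier
    keys to zoom" is enabled in the Accessibility settings.

    `quartz` defaults to pyobjc's Quartz module; it is injectable so
    the logic can be exercised off-macOS with a stub.
    """
    if quartz is None:
        import Quartz as quartz  # pyobjc bindings, macOS only
    # One scroll axis, `lines` units (positive zooms in, negative out)
    event = quartz.CGEventCreateScrollWheelEvent(
        None, quartz.kCGScrollEventUnitLine, 1, lines)
    quartz.CGEventSetFlags(event, quartz.kCGEventFlagMaskControl)
    quartz.CGEventPost(quartz.kCGHIDEventTap, event)
    return event
```

For example, `control_scroll(3)` would zoom in by three scroll lines, mirroring the Control+wheel gesture described in the question.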

How do you enable the expose feature on Windows 8? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
The other day I was moving my laptop with a wireless mouse. I accidentally dropped the mouse and don't know which buttons were clicked, but an Exposé-like feature (just like on a Mac) came up and I was really impressed! I like Macs and own one, but am still a Windows guy at heart. Does anyone know how to trigger it? I am not talking about installing external software; I know the feature is built into Windows 8 because I saw it firsthand. Online searches for Exposé tips and tricks turned up nothing. Thanks!
Okay, I figured it out. I love Microsoft... This might work with any external mouse supporting three buttons. I am using a wireless Microsoft USB mouse with three buttons, the middle one being a scroll wheel. If I hold the left or right mouse button down and then click the scroll wheel, the Exposé-like view comes up. I can then release both buttons, and scrolling lets me select which application to focus on (or I can just click the app I want). The only thing I haven't figured out is that when this first happened, the apps surrounded my desktop in the view; this time, they just appear as an organized grid. Solution by webappguy-dot-com.
Windows 8 has no native Exposé-like feature the way the Mac has.
Still, there is third-party software around the web that can do it.
To cycle between apps, press Alt+Tab.

How do I take a screengrab of a single window using Cocoa or Carbon on OS X? [closed]

Closed 7 years ago.
I need to be able to identify a single window on the user's screen and take a screen capture of it. The screen data is to be stored in memory and not written to disk.
This is already supported through the commandline tool /usr/sbin/screencapture or through the Grab utility (though their functionality is not extensive enough to justify me launching them as a subprocess).
References / Hints
nm /usr/sbin/screencapture returns private Cocoa interfaces including _CGSGetSharedWindow that appear to do this.
Third party application Snapzpro does this (but does not provide source code)
Mac OS X 10.5 introduced the Quartz Window Services API to do just this.
The first thing that came to mind was GrabFS from MacFuse. The source is here.
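The Quartz Window Services route mentioned above can be sketched with pyobjc. This is a hedged sketch under assumptions: the helper names are illustrative, it requires macOS with pyobjc installed, and modern macOS additionally gates window capture behind a Screen Recording permission. The captured CGImage stays in memory, matching the question's no-disk requirement.

```python
def find_window_id(window_infos, owner_name):
    """Pick the first window ID whose owning app matches.
    `window_infos` is the list of dicts that
    CGWindowListCopyWindowInfo returns."""
    for info in window_infos:
        if info.get("kCGWindowOwnerName") == owner_name:
            return info.get("kCGWindowNumber")
    return None

def grab_window(owner_name):
    """Capture one app's on-screen window as a CGImage held in
    memory (nothing is written to disk)."""
    import Quartz  # pyobjc bindings, macOS only
    infos = Quartz.CGWindowListCopyWindowInfo(
        Quartz.kCGWindowListOptionOnScreenOnly, Quartz.kCGNullWindowID)
    win_id = find_window_id(infos, owner_name)
    if win_id is None:
        return None
    return Quartz.CGWindowListCreateImage(
        Quartz.CGRectNull, Quartz.kCGWindowListOptionIncludingWindow,
        win_id, Quartz.kCGWindowImageDefault)
```

For example, `grab_window("Safari")` would return a CGImageRef of Safari's frontmost on-screen window, or `None` if no match is found.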
Command+Shift+4 to activate the screenshot selection, then tap the space bar to select the whole window.

Game UI HUD [closed]

Closed 8 years ago.
What are some good tools and techniques for making in game UI? I'm looking for things that help artists-types create and animate game HUD (heads up display) UI and can be added to the game engine for real time playback.
If you are working with a middleware environment like Torque or Unity3D, they include a GUI framework to build on. Flash is an ideal tool, but to use it in anything other than a Flash or Shockwave 3D game you need to purchase Scaleform too, which is expensive and isn't easy to get hold of for indie developers. WPF and Silverlight look promising for this purpose, but so far haven't been set up for game integration.
Unfortunately, for many developers the only solution is to roll their own UI components from scratch.
Using Flash will give the graphical artist the highest productivity (assuming they know Flash).
You may want to have a look at gameswf. It's a bit dated but seems like a perfect match for your problem.
http://tulrich.com/geekstuff/gameswf.html
Another option would be to just do the entire UI in your 3D content-tool and use your animation system to play back the transitions.
One option is to use Flash in conjunction with a package called Scaleform. This allows the artist to build the UI in Flash, and Scaleform then plays the Flash content back in-game.
