How can I automatically control a terminal?

I'm running 20 identical simulators (text-based UIs like vi, refreshed quickly), and I need to control them all in a very similar way, e.g. by sending command strings to start/stop/configure each simulator. The display matters: I need to watch their output flow in the terminal. Currently I can automatically start each one in a separate terminal.
But after that, I have no idea how to control them automatically. If I spawn a simulator using expect without a terminal, I will not be able to watch the output. Any suggestions on how I could proceed, or what tools could help?

This is tricky. You might be able to send your curses-based program the exact escape sequences that your keystrokes would generate, and drive it that way. I don't know how reliable or easy that would be.
Would it not be possible to create an alternate, scriptable front end for your simulator and use that for automated tasks like this, rather than the curses UI that is meant for human interaction?
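If the simulators can run under a terminal multiplexer, tmux (mentioned further down this page) offers a scriptable middle ground: each simulator runs in a visible pane, and tmux send-keys injects keystrokes. A minimal sketch, assuming tmux is installed; "simulator --id N" is a hypothetical command line, and only four panes are shown for brevity:

#!/bin/sh
# Start four simulators in tiled panes of one tmux session.
tmux new-session -d -s sims "simulator --id 1"
for i in 2 3 4; do
    tmux split-window -t sims "simulator --id $i"
    tmux select-layout -t sims tiled
done

# Broadcast the same command string (plus Enter) to every pane.
for pane in $(tmux list-panes -t sims -F '#{pane_id}'); do
    tmux send-keys -t "$pane" "start" Enter
done

tmux attach -t sims    # watch all the displays update live

Because the panes stay attached to real pseudo-terminals, you keep the live displays while still driving everything from a script.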

Related

Manipulate OS X windows with script

Ok, so I'm trying to make my setup super simple by creating a script that I can run in the morning that will launch all the applications I use during the day and lay them out across my 'spaces' how I like them.
This was going ok, and I was easily able to have a bash script launch the apps and then call an AppleScript to move and resize their windows.
However, I like to use the new El Capitan feature and have some of my spaces as 'split view' spaces. E.g. Full screen Xcode/Terminal split. I can't seem to find a way to control this via a script.
Tl;dr Does anyone know how to get a bash script/AppleScript to put two applications into 'split view' on OS X El Capitan?
Looks like that first bit, launching and full-screening apps, can be done with a fairly simple script (see the sketch below), though it requires enabling Accessibility permissions first. It won't, however, do the split-screen bit.
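A minimal sketch of that first bit, assuming Accessibility access has been granted to whatever runs the script, and using Safari purely as an example app:

#!/bin/bash
# Launch an app, then toggle its frontmost window into native full screen.
open -a "Safari"
sleep 2    # crude wait for the window to appear
osascript <<'EOF'
tell application "System Events" to tell process "Safari"
    -- setting the AXFullScreen accessibility attribute enters full-screen mode
    set value of attribute "AXFullScreen" of window 1 to true
end tell
EOF

Repeat per app; each full-screened app gets its own space.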
I kept looking, though, and Better Touch Tool (pay what you want, $4.49 minimum) seems to get the closest of anything I could find, allowing you to trigger full-screen mode and bring up the split-screen Exposé selector in the same action. It seems to do this by emulating a long mouse-down on the full-screen window control button (the green one in the top left).
I've been playing around with this, and it seems there might be a (so far seemingly very opaque, though reliable) way to control the order of full-screen apps and trigger an app into split-screen mode in a situation where that previously full-screened app is the only option available for splitting the screen.
For example, launch iA Writer into full screen (space 2) via ⌘+^+F. Focusing Safari and using Better Touch Tool to trigger split-screen mode then offers only one split-screen candidate, even though several apps are still running.
From this position you could use the "move to position" action in BTT and trigger a click on the only available app; I would think this could theoretically accomplish what you want, although it's convoluted and a bit suspect.
All that being said, this seems like the only way to get two apps into split-screen mode without touching the mouse, since it could all be a BTT workflow you trigger from an Automator script. Digging further, you might be able to learn how BTT accomplishes its actions and write a program that does this for you, but that is already way beyond bash or simple CLI scripting.
I personally just use Spectacle and tmux to zoom my windows around, though I admit, automated split screen would be somewhere close to life-changing.

How can one view the AppleScript code that executes against a particular application?

There is an application that controls Microsoft Word 2011 for Mac using AppleScript.
It does really nice things that I want to implement in my own app.
So, is it possible to intercept the AppleScript calls to a particular application and reconstruct the source code of the AppleScript that made those calls?
It is impossible to view the source code of an AppleScript that is executed against a particular app.
But debugging the Apple events themselves can shed light on what is going on.
So I just opened Terminal.app and ran a command:
env AEDebugReceives=1 /Applications/Microsoft\ Office\ 2011/Microsoft\ Word.app/Contents/MacOS/Microsoft\ Word
That will force Microsoft Word (in fact, almost any application) to print all received Apple events to the terminal.
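The same mechanism works from the other side: if you can launch the controlling application from a terminal, setting AEDebugSends=1 logs every Apple event it sends, which is closer to what the question asks for. The application path below is hypothetical:

env AEDebugSends=1 /Applications/ControllingApp.app/Contents/MacOS/ControllingApp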

Is there a WinAPI call or keyboard shortcut to put the Windows console into "mark" mode?

Normally the user does this by right-clicking the console title bar, then selecting "Edit" and finally "Mark". -> http://www.megaleecher.net/Copy_Paste_Text_Dos_Window
So is there a way of doing it from a console application, either by sending a message, an API call, or a keyboard sequence to its own window?
If this is your own application and you want the richer behaviour and flexibility of a windowed app rather than a console app, then write a windowed app. Otherwise, you can try to automate the steps by simulating the input via SendInput. I would advise against this because it requires two steps (once for the right-click, once to select 'Mark'); if someone clicks something else between those two events, your sequence will be broken. Furthermore, you are relying on the automation of an implementation detail which is prone to change at any point.
Looking through the Console Functions, it doesn't appear as though anything exists for setting the selection. The closest is going the other way with GetConsoleSelectionInfo.
If you want to process the information that is within a console application, a better alternative is to pipe it to your own process and deal with it there.
Found it: PostMessage(GetConsoleWindow(), WM_COMMAND, 65522, 0); (65522 is 0xFFF2, the undocumented ID of the "Mark" command on the console window's system menu.)

How to: Simulating keystroke input in a shell for an app running on an embedded target

I am writing an automation script that runs on an embedded Linux target.
Part of the script involves running an app on the target and obtaining some data from its stdout. Stdout here is the SSH terminal connection I have to the target.
However, this data is available on stdout only if certain keys are pressed, and the key press has to be made on the keyboard connected to the embedded target, not on the host system from which I have SSH'd into the target. Is there any way to simulate this?
Edit:
Elaborating on what I need -
I have an OpenGL app that I run on the embedded Linux (works like regular Linux) target. It displays some graphics on the embedded system's display device. Pressing f on the keyboard connected to the target outputs the FPS data to the SSH terminal from which I control the target.
Since I am automating the process of running this OpenGL app and obtaining the FPS scores, I can't expect a keyboard to be connected to the target, let alone expect a user to input a keystroke on it. How do I go about this?
Edit 2:
Expect doesn't work, since it can issue keystrokes only to the SSH terminal. The keystroke I need to send to the app has to come from the keyboard connected to the target (this is the part that needs simulating, without a keyboard actually being connected).
Thanks.
This is exactly the domain of Expect, which Stack Overflow incidentally recognizes with its own tag.
The quickest way to achieve the OpenGL automation you're after, while learning as little of Expect as necessary, is likely by way of autoexpect.
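For instance, a hypothetical session (run_sim.sh stands in for whatever you launch over SSH):

# Record an interactive SSH session; autoexpect saves a replayable Expect script.
autoexpect -f run_sim.exp ssh root@target ./run_sim.sh
# Replay it later, unattended.
expect run_sim.exp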
I'm not at home right now (so no Linux at hand), so I can't actually try it out. But you should be able to emulate keystrokes if you echo the desired keypresses (possibly the keyboard codes) to the keyboard node located under /dev on your target. This could be done through your SSH connection.
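A concrete sketch of this idea uses the evemu tools to inject events into an input node. It assumes evemu is installed on the target and the keyboard node is /dev/input/event0 (check with evemu-describe; if no physical keyboard exists at all, evemu-device can first create a virtual one from a saved device description):

# Run over SSH on the target: inject a press and release of the 'f' key.
evemu-event /dev/input/event0 --type EV_KEY --code KEY_F --value 1
evemu-event /dev/input/event0 --type EV_SYN --code SYN_REPORT --value 0
evemu-event /dev/input/event0 --type EV_KEY --code KEY_F --value 0
evemu-event /dev/input/event0 --type EV_SYN --code SYN_REPORT --value 0

The app then sees the keystroke exactly as if it had come from the real keyboard.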
A solution which would satisfy both QA and manufacturing test is to build a piece of hardware which looks like a keyboard to the embedded device and has external control. Depending on how complex your input needs to be this could be anything from a store-bought keyboard with the spacebar taped down to a microcontroller talking PS/2 or USB on the keyboard side and something else (serial, USB, ethernet) on the control side.
With the LUFA library it is remarkably easy to make a USB keyboard with AT90USB series parts. Some of them even have 2 USB ports and could be automated by USB connected to another system (or if you want to get cheeky you could have it enumerate both as a keyboard and the control device and loop the keyboard input through the embedded system).
How about echo "f" > /dev/console ?
How about creating a text file with the inputs and running it as follows?
cat inputs.txt | target_executable
The contents of inputs.txt can be something like this (note this only works if the app reads its input from stdin rather than directly from the keyboard device):
y
y
y
n
y

Unable to use X clipboard in Screen

I read the following code in Unix Power Tools on page 117
*VT100.Translations: #override\
Button1 <Btn3Down>: select-end(primary,CUT_BUFFER0,CLIPBOARD)\n\
!Shift <Btn2Up>: insert-selection(CLIPBOARD)\n\
~Shift ~Ctrl ~Meta <Btn2Up>: insert-selection(primary,CUT_BUFFER0)
I have not managed to see any effect of the above code.
How can you use X clipboard in Screen, without your mouse?
Using the mouse. Left-click and drag to select, and usually the middle mouse button pastes, but some terminals may differ (PuTTY uses right-click). If you only have two buttons, you click them both together (left mouse button + right mouse button).
In reply to comment below ("Can you do it without your mouse?"):
Ctrl-Insert: copy
Shift-Insert: paste
Shift-Delete: cut
Shift-Ctrl-C: copy
Shift-Ctrl-V: paste
Not all applications will support the last three (though Konsole does). In fact most console applications will not allow you to delete text once it's printed.
As far as selecting text without a mouse, I'm not sure there's a generic mechanism for that. It's probably terminal- and/or application-specific (e.g., vim has its own keys for marking and copying text, but only within vim). You could do it with mouse emulation, but I'm sure that would be a painful process.
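For GNU Screen specifically, one keyboard-only bridge between Screen's paste buffer and the X CLIPBOARD is Screen's exchange file. A sketch for ~/.screenrc, assuming xsel is installed and DISPLAY is set inside the session:

# Route Screen's paste buffer through the exchange file to the X clipboard.
bufferfile /tmp/screen-exchange
# Ctrl-a > : paste buffer -> exchange file -> X clipboard
bind > eval writebuf "exec sh -c 'xsel -ib < /tmp/screen-exchange'"
# Ctrl-a < : X clipboard -> exchange file -> paste buffer
bind < eval "exec sh -c 'xsel -ob > /tmp/screen-exchange'" readbuf

Select text with Screen's copy mode (Ctrl-a [), copy it with Enter, then Ctrl-a > pushes it to the clipboard, all without the mouse.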
You can't use the traditional Mac/Windows shortcuts in a terminal because they were reserved for different actions long before those OSes existed (e.g., Ctrl-C terminates the running process).
I'm trying to use Ctrl-C in X
X does not handle these operations directly; they are handled by the application. That's why modern GUI programs like Firefox or Gedit support Ctrl-C for copy, but terminals and command-line programs generally do not. As I said, it's a conflict of established conventions, and Ctrl-C for kill got in first.
BTW, you could do some key-remapping if it drives you nuts, but then you would be learning bad habits for when you use a different machine. Best to just get used to it, or do most of your editing in a GUI application.
More Information
EDIT: For a Mac, this may help: MacOSX-to-Konsole or This or This. It looks like you need to replace Ctrl with Command on Mac keyboards. It seems that Terminal, the Mac console, has a right-click context menu for copy-paste, so to do it the traditional way you may need to install a different console program or change some settings in Terminal.
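As for the Unix Power Tools snippet having no visible effect: translation resources like *VT100.Translations apply to xterm's VT100 widget, not to Screen itself, and they only take effect once loaded into the X server (an assumption about the asker's setup, but a common omission):

# Merge the resources into the running X server, then start a fresh xterm.
xrdb -merge ~/.Xresources
xterm &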
