Using a USB keyboard device to trigger terminal commands on a Raspberry Pi

I have a small robot which runs on a raspberry pi.
I need to be able to control it with an RF remote to trigger a few different terminal commands which run short Python scripts.
Previously I did this with a GUI on my MacBook, triggering the commands over SSH, but I now need to be able to trigger them in the absence of an internet connection.
The remote I bought is:
https://www.adafruit.com/products/3092?gclid=CNPj7LjTgNECFdOPswodsiULYA
I realize that this is designed for OSMC.
This remote shows up as a USB keyboard on the Pi, which makes the challenge more general:
How could one rig a "USB keyboard device" to trigger entire terminal commands with the click of one key?
My low-level knowledge of hardware is limited, and my programming experience extends little beyond python.
Any direct solution or suggested reading is much appreciated.
I am also open to alternatives, however I do not have time to order new hardware online.

You could write a Python script which uses the 'os' library to interact with the shell: have it watch for certain keystrokes and then run the relevant command via os.system().
e.g.:
if key == "F":
    os.system("cd Dropbox")
Bear in mind that each os.system() call runs in its own shell, so a lone cd like this has no lasting effect; chain whatever you actually want to run into the same call (e.g. "cd Dropbox && python3 somescript.py"), but you get the idea.
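
For a slightly more complete picture, here is a rough sketch along the same lines using the python-evdev library (not mentioned above, but a common way to read a USB keyboard's events directly from /dev/input, so it works from a boot script with no terminal in focus). The device path, the keycodes, and the script paths are assumptions to replace with your own:

import subprocess
from evdev import InputDevice, categorize, ecodes  # pip install evdev

# Hypothetical mapping of remote keys to the robot's Python scripts.
COMMANDS = {
    "KEY_UP": ["python3", "/home/pi/robot/forward.py"],
    "KEY_DOWN": ["python3", "/home/pi/robot/reverse.py"],
    "KEY_ENTER": ["python3", "/home/pi/robot/stop.py"],
}

dev = InputDevice("/dev/input/event0")  # find yours under /dev/input/by-id/
dev.grab()  # optional: keep the keystrokes from reaching the console

for event in dev.read_loop():
    if event.type != ecodes.EV_KEY:
        continue
    key = categorize(event)
    if key.keystate != key.key_down:
        continue
    # keycode is occasionally a list of aliases; take the first in that case
    code = key.keycode if isinstance(key.keycode, str) else key.keycode[0]
    if code in COMMANDS:
        subprocess.run(COMMANDS[code])

Start something like this at boot (for example from a systemd service or /etc/rc.local) and the remote can trigger the scripts with no SSH session at all.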

Related

Is there a faster way to connect to a Bluetooth device from cmd/powershell than btcom from Bluetooth command line tools?

I've put together a Powershell script to allow me to connect my Bluetooth headphones to my PC without having to open the Bluetooth settings page each time (based on the ones in https://github.com/stanleyguevara/win10-bluetooth-headphones, but using Get-PnPDevice and Get-PnPDeviceProperty to check whether the device is connected rather than using an environment variable to save the state).
The script works, but there's one big QoL issue. The script uses the Bluetooth command line tools here to connect/disconnect the device (in particular, it uses the btcom command). However, these commands are very slow to run, with the whole process taking around a minute total. This is true even though I am using the device's MAC address to connect, and not its friendly name (which would be even slower). This makes using the script much slower than just opening the settings panel each time (though opening the settings panel is less convenient since it requires opening and going through multiple windows).
I've seen many questions about this sort of thing (how to connect/disconnect a Bluetooth device from cmd/powershell), but everything I've seen regarding Windows tends to suggest using the Bluetooth command line tools at the link above, so it doesn't solve the speed issue. Other suggestions I've found involve disabling the Bluetooth adapter entirely, which isn't what I want to do. Others suggest using the Win+K shortcut to open the connections side panel, but this doesn't really answer whether there's a way to do it from cmd/powershell, and it is slightly less automated since you have to wait a second for the list to populate and manually navigate to the device to connect/disconnect (though at least it avoids opening a bunch of windows).
Is there a way to connect/disconnect from a Bluetooth device in cmd/powershell that is faster than btcom?

Golang detect if in focus or minimized

How would I go about detecting if my go CLI program is in focus or minimized?
Current program based off https://github.com/jroimartin/gocui
I require the functionality as it is a chat program and I would like to send OS notifications but only when the program is not in focus or is minimized.
Your help/direction is much appreciated, as right now I am unsure where to start.
This is not possible from the library itself. A command-line program does not have focus; only the terminal program it is running in does.
Implementing that (if possible at all) would depend on the OS, the window manager, etc.
To refine the answer provided by @mbuechmann, I suggest the OP not try to resort to APIs and the like.
The reasoning is simple.
"Contemporary" users are used to running programs in terminal emulators which are typically presented as separate windows, and so the users naturally think of these programs as not really different from GUI apps.
But the reality is different: a terminal emulator, whether graphical or not (for instance, the so-called "virtual consoles" provided by the Linux kernel running on x86/amd64 hardware are terminal emulators as well), really emulates a typical work session on a real hardware terminal, and there a program would run solely in the foreground; the only means of "switching" to another program was the shell's job control (the jobs, bg and fg commands).
In other words, the whole concept of a program working in a terminal has an inbuilt assumption that the terminal is always "foreground"—since at the time the concept was developed, a terminal was a physical device.
Now please also consider that "terminal emulation" may be more pervasive on a contemporary system than you might think: screen and tmux on a Unix-like OS are multiplexing terminal emulators—which may themselves be run in a terminal emulator, and a console window on Windows™ may be considered to be a terminal emulator of sorts as well.
So, "resorting to APIs" have several technical problems:
Terminal emulation tries to actually decouple the program which uses this facility from being aware of how the facility is actually provided.
To put it simple, there's, say, no easy way on X Window System, to know what window is used by the terminal emulator running your program.
You'd need to cover diverse set of APIs in order for your program to still be useful: X Window System on Unix-like systems, Mac OS, Windows™. And contemporary GUI stacks running on Linux tend to be switching to Wayland instead of X.
In certain cases, like running a program in a "nested" terminal emulation sessions (for example, a pane in a "window" of a tmux running in xterm), figuring out such facts about the environment might be next to impossible.
And still the crucial problem is that if your program really needs to know whether it's focused or not, it actually wants to be aware about the concepts currently hardly accessible to it. I mean, it wants to be GUI. And if so, just make it GUI.
In fact, it may be simpler than you think. The core of your program might still be a CLI app, with a thin GUI wrapper around it which uses some form of IPC to talk to the app (two-way, if needed).
The simplest approach is to write (usually line-wise) data to the program's standard input.
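
To make that split concrete, here is a minimal Python sketch of the wrapper idea (the OP's program is Go, but the shape is the same in any language); the "./chat-core" program name and the "focus on/off" line protocol are made up for illustration:

import subprocess

# Launch the CLI core and keep a pipe to its standard input.
core = subprocess.Popen(["./chat-core"], stdin=subprocess.PIPE, text=True)

def on_focus_changed(focused):
    # Called from the GUI toolkit's focus-in/focus-out event handlers.
    core.stdin.write("focus on\n" if focused else "focus off\n")
    core.stdin.flush()

The core reads those lines from its stdin and decides for itself whether an incoming chat message should raise an OS notification.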

How to create stand-alone mock up program running in raspberry

I need to create a very simple program that should run on a Raspberry Pi without a network connection. The program should first show one full-screen bitmap ("insert disk") and, after somehow receiving an external signal (disk inserted), another bitmap asking for a password. After the password is entered (each key press shows an asterisk *), the application should show yet another bitmap informing the user whether the password was correct or not.
So in principle I would like to create something that looks like password screen in any Hollywood movie!
The Raspberry Pi should boot directly into the application.
I was expecting this to be easy to do (and it would be if we could use Windows and Visual Studio), but I haven't yet found a simple tool to create this for the Pi. Booting the Raspberry Pi into a browser in kiosk mode and creating an HTML application seems like overkill.
Although a browser in kiosk mode might look like a sledgehammer to crack a nut, I think you might find this nut harder than it looks.
It wouldn't be difficult to write a simple app in Java, or Python, or perhaps even C using GTK, that carries out the actions you want. You could have the app loaded when X starts, as an alternative to a desktop and window manager. You could even do away with X altogether and write some code that interacts directly with the video framebuffer and the keyboard hardware. Or, heck, go the whole hog and have your code substitute for the operating system kernel :)
I would guess that even the simplest of these approaches involves more work than hacking something up using HTML and JavaScript in a browser.
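
For a sense of scale, here is a rough Python/Tkinter sketch of the flow from the question: full-screen bitmaps, a masked password prompt, and a success/failure screen. The image file names, the password, and the use of "press any key" as a stand-in for the real disk-inserted signal are all assumptions:

import tkinter as tk

PASSWORD = "letmein"  # placeholder

root = tk.Tk()
root.attributes("-fullscreen", True)
root.configure(bg="black", cursor="none")

label = tk.Label(root, bg="black", fg="green", font=("Courier", 48))
label.pack(expand=True, fill="both")

# insert_disk.png etc. are placeholder bitmaps you would supply.
images = {name: tk.PhotoImage(file=name + ".png")
          for name in ("insert_disk", "enter_password", "granted", "denied")}

state = {"phase": "insert", "typed": ""}

def show(name, text=""):
    label.configure(image=images[name], text=text, compound="bottom")

def on_key(event):
    if state["phase"] == "insert":
        # Stand-in for the real "disk inserted" signal: any key press.
        state["phase"] = "password"
        show("enter_password")
    elif state["phase"] == "password":
        if event.keysym == "Return":
            state["phase"] = "done"
            show("granted" if state["typed"] == PASSWORD else "denied")
        elif event.keysym == "BackSpace":
            state["typed"] = state["typed"][:-1]
            show("enter_password", "*" * len(state["typed"]))
        elif len(event.char) == 1:
            state["typed"] += event.char
            show("enter_password", "*" * len(state["typed"]))

show("insert_disk")
root.bind("<Key>", on_key)
root.mainloop()

Pointing the Pi at a script like this on startup, instead of the normal desktop session, then gives the "boots directly into the application" behaviour.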

How to get tabs in unix session on putty

I have used a script in the past that enabled me to connect to multiple Unix machines, much like using tabs. It's just that I forgot the name of the script. Anyone know about it?
I suspect you're referring to GNU screen, which is a terminal multiplexer that allows you to have multiple virtual terminal windows in a single normal terminal window (ala PuTTY). I'd suggest tmux as a better alternative, but they're essentially the same. There are other solutions that will allow you to do tabs in the terminal client, but that depends on your OS (I'm assuming you're on Windows), and you'd have to initiate each connection individually. screen/tmux is the way to go most of the time.

running Mathematica remotely on macs

Here is what I want to do:
I want to run Mathematica on another Mac from my Mac (both running Snow Leopard). I want to do this because the remote Mac has multiple cores/processors while my local Mac is rather shabby. I would still like to have the front end (i.e. the graphical interface) locally.
What I've learned:
I used to do this type of thing from multiple Linux machines and was expecting to have similar success for Mac-to-Mac operation. However no such luck.
The problem seems to be a display issue (front end).
The Mac front end runs in Aqua, while X11 is what is really needed (which is why there is no problem on Unix). While Macs have X11, for some reason Mathematica can't use it.
So how do I get around this issue?
Possible solutions that I have had to rule out are:
1. Screen sharing. Not practical, since someone else will be using the remote Mac on another account and screen sharing only uses the active screen.
2. Installing Unix on the remote computer. Not possible in my situation.
Thanks.
You should be able to set up a remote kernel on the other Mac. This is done through the Evaluation > Kernel Configurations menu item. Then you can set the remote kernel for a given notebook using Evaluation > Notebook's Kernel, or globally via Evaluation > Default Kernel.
I haven't done this in a while, and it's sometimes useful to test things from a terminal with something like
ssh <user>@<remote.machine.com> </path/to/remote/Mathematica.app/Contents/MacOS/MathKernel>
Why not use the command line kernel? I have a script math which does:
#!/bin/bash
rlwrap /Applications/Mathematica.app/Contents/MacOS/MathKernel
I built rlwrap from source, but basically that tool gives you readline behaviors. You can just do
ssh remote-machine /Applications/Mathematica.app/Contents/MacOS/MathKernel
The only solution, I believe, is for you to upgrade to OS X Lion. It allows simultaneous screen sharing sessions where each user can control the screen for their own account:
http://www.apple.com/macosx/whats-new/features.html#screensharing
