Programmatically run a zsh command from a script

As part of a bigger script I'm using print -z ls to have zsh's input buffer show the ls command. This requires me to manually press Enter to actually execute the command. Is there a way to have zsh execute the command itself?
To clarify, the objective is to have the command run and kept in history, and, if another command is already running, the new one shouldn't run in parallel with it.

The solution I've found is:
python -c "import fcntl, sys, termios; fcntl.ioctl(sys.stdin, termios.TIOCSTI, '\n');
I'm not sure why, but sometimes you might need to repeat the command twice for the actual command to be executed. In my case this happens because I send a process to the background, although that still doesn't make much sense, because that process sends a signal back to the original shell (triggering a trap), and the trap is what actually calls this code.
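For context, here is a rough (untested) sketch of how the two pieces can be glued together in zsh; the function name run_and_enter is made up, and it assumes stdin is the controlling terminal and that the kernel still allows the TIOCSTI ioctl:
run_and_enter() {
  print -z -- "$1"    # put the command into zsh's editing buffer for the next prompt
  # push a newline into the terminal's input queue so zsh "presses enter" for us,
  # executing the buffered command and recording it in history
  python -c "import fcntl, sys, termios; fcntl.ioctl(sys.stdin, termios.TIOCSTI, '\n')"
}
run_and_enter 'ls'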
In case anyone is interested, this was my goal:
https://gist.github.com/alexmipego/89c59a5e3abe34faeaee0b07b23b56eb

Related

How do I send commands to the ADB shell directly from my app?

I want to send commands to the ADB shell itself, as if I had done the following in cmd:
>adb shell
shell#:/ <command>
I am using Python 3.4 on a 64-bit Windows 7 machine. I can send one-line shell commands simply using subprocess.getoutput, such as:
subprocess.getoutput('adb pull /storage/sdcard0/file.txt')
as long as the adb commands themselves are recognized by adb specifically, such as pull and push. However, there are other commands, such as grep, that need to be run IN the shell, like above, since they are not recognized by adb. For example, the following line will not work:
subprocess.getoutput('adb shell ls -l | grep ...')
To enter the commands in the shell I thought I needed some kind of expect library, as that is what 'everyone' suggests, but pexpect, wexpect, and winexpect all failed to work. They were written for Python 2, and after porting them to Python 3 and going through the .py files by hand, even those tweaked for Windows, nothing was working, each of them for different reasons.
How can I send the input I want to the adb shell directly?
If none of the already recommended shortcuts work for you, you can still go the 'regular' way and enter commands in the adb shell with subprocess.Popen:
import subprocess
import time
from subprocess import PIPE

cmd1 = 'adb shell'
cmd2 = 'ls -l | grep ...'
p = subprocess.Popen(cmd1.split(), stdin=PIPE)   # open an interactive adb shell
time.sleep(1)                                    # give the shell a moment to start
p.stdin.write(cmd2.encode('utf-8'))              # send the command...
p.stdin.write('\n'.encode('utf-8'))              # ...and the newline that runs it
p.stdin.flush()
time.sleep(3)                                    # leave time for the command to finish
p.kill()                                         # close the shell (and the pipe)
Some things to remember:
even though you import subprocess, you still need to invoke it as subprocess.Popen
sending cmd1 as a string, or as items in a list, should work too, but .split() does the trick and is easier on the eyes
since you only specified that you want to send input to the shell, you only need stdin=PIPE; stdout would only be necessary if you wanted to receive output from the shell
time.sleep(1) isn't really necessary, but since many people have complained about input being faster or slower in Python 2 vs 3, consider using it; 'they' might have been using versions of 'expect' that need the shell's reply first. This code also worked when I tested it with time.sleep(0) instead
stdin.write will return an error if the input is not encoded properly; Python's default is Unicode. Passing the input as a bytes literal (e.g. b'ls ...') did not work in my tests, but .encode() did. Don't forget the newline!
if you use .encode() there is a worry that the line might not get sent right away, so to be sure it is good to include a flush()
time.sleep(3) is completely unnecessary, but if your command takes a long time to execute (e.g. a recursive search through the entire device piped out to a txt file on the memory card), maybe give it some extra time before killing anything
remember to kill the process. If you don't kill it, the pipe may remain open, and even after exiting the test app on the console the next command still went to the shell, even though the prompt appeared to be my regular cmd prompt
Amichai, I have to start by pointing out that your own "solution" is pretty awful, and your explanation makes it even worse. You are doing all those unnecessary things only because you do not understand how shell command parsing works (here I mean your PC's OS shell, not adb).
When all you needed was just this one command:
subprocess.check_output(['adb', 'shell', 'ls /storage/sdcard0 | grep ...']).decode('utf-8')
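The key point is that the entire pipeline is passed to adb as a single argument, so the shell on the device is the one that interprets the |. You can sanity-check the same quoting from an ordinary prompt first (shown here for a POSIX shell; in cmd.exe you would use double quotes instead):
adb shell 'ls /storage/sdcard0 | grep ...'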

How can I run all bash processes in the background?

I don't think that running a process in the foreground is useful in any way, so I'd like to run all processes in the background. Is that possible?
Also, tell me if there are any problems associated with doing so.
You can adapt the code from this question: https://superuser.com/questions/175799/does-bash-have-a-hook-that-is-run-before-executing-a-command
Basically this uses the DEBUG trap to run a command before whatever you've typed on the command line. So, this:
preexec () { :; }
preexec_invoke_exec () {
    [ -n "$COMP_LINE" ] && return                       # do nothing if completing
    [ "$BASH_COMMAND" = "$PROMPT_COMMAND" ] && return   # don't cause a preexec for $PROMPT_COMMAND
    local this_command=$(HISTTIMEFORMAT= history 1)
    preexec "$this_command" &
}
trap 'preexec_invoke_exec' DEBUG
runs the command, but with an & after it, backgrounding the process.
Note that this will have other rather weird effects on your terminal, and anything supposed to run in the foreground (command line browsers, mail readers, interactive commands, anything requiring input, etc.) will have issues.
You can try this out by just typing bash, which will execute another shell. Paste the above code, and if things start getting weird, just exit out of the shell and things will reset.
Do you mean a bash script? Just add & at the end. Example:
$ ./myscript &
While it might be possible to do something clever like what @pgl suggests, it's not a good idea. Processes running in the background don't show you their output in a useful way, so if all processes are automatically sent to the background, your terminal will be flooded with their various standard output and standard error messages, you will have no way of knowing what came from what, your terminal will be next to useless, and confusion will ensue.
So yes, there is a very good reason to keep processes in the foreground: to see what they're doing and to be able to control them easily. To give an even more concrete example, any program that requires you to interact with it can't be run in the background. This includes things that ask Continue [Y/N]? and things like sudo that ask for your password. If you just blindly make everything run in the background, such commands will simply hang silently.
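To give a quick illustration of that last point (the exact messages vary by shell and system, so treat this transcript as approximate): a backgrounded command that tries to read from the terminal is stopped by the kernel instead of running.
$ sudo ls &
[1] 12345
$ jobs
[1]+  Stopped                 sudo ls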

Can the shell be told to save command output?

I'm thinking a hypothetical CMDOUTPUT would be useful:
locate -r 'regexp...' # locate finds a file: /myfile.
# Shell puts `/myfile' string into CMDOUTPUT
vim $CMDOUTPUT # No need to run locate again as with: vim `!!`
The locate command above is just an example. I want the output saved for all commands that I run so that if I need it I can access it quickly. (The output should still be printed by the command to stdout.) I don't want to do
CMDOUTPUT="$(...)"
or
command | tee /tmp/cmdoutput
or anything else that I have to do, because that's more typing for me at the prompt for everything that I run: I want the shell to do it all in the background. Again, to make it clear: I am casually typing commands away and decide, "Oh, I want to use the output of that last command in this command, let me just retrieve it...". Can I tell the shell to store the output somehow so that I can retrieve it?
If there's no option for it, is there some way that I can implement it that is as close to invisible as it can be, meaning exit codes from the command are not lost (...and that's all I can think of, but I'm sure there are other subtleties), etc.? I'm primarily thinking of zsh, but answers for any shell would be useful.
I found a solution; I'm not sure if it's exactly what you're looking for, but it should provide a start :)
zsh | tee log >&1
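For what it's worth, here is a rough sketch of a lower-tech alternative (my own suggestion, not part of the answer above): record the whole session with script(1) and fish output back out of the transcript when you need it. The log file name and the grep pattern are just illustrative.
script -q ~/.zsh_session.log          # start a new logged shell; everything below is recorded
locate -r 'regexp...'                 # prints /myfile to the screen and to the log
vim "$(grep -m1 '^/myfile' ~/.zsh_session.log | tr -d '\r')"   # pull the path back out (tr strips the CR that script records)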

How can I flush the input buffer in an expect script?

I'm writing an Expect script and am having trouble dealing with the shell prompt (on Linux). My Expect script spawns rlogin and the remote system is using ksh. The prompt on the remote system contains the current directory followed by " > " (space greater-than space). A script snippet might be:
send "some command here\r"
expect " > "
This works for simple commands, but things start to go wrong when the command I'm sending exceeds the width of the terminal (or more precisely, what ksh thinks is the width of the terminal). In that case, ksh does some weird horizontal scrolling of the interactive command line, which seems to rewrite the prompt and stick an extra " > " in the output. Naturally this causes the Expect script to get confused and out of sync when there appears to be more than one prompt in the output after executing a command (my script contains several send/expect pairs).
I've tried changing PS1 on the remote system to something more distinctive like "prompt> " but a similar problem arises which indicates to me that's not the right way to solve this.
What I'm thinking might help is the ability for the script to tell Expect that "I know I'm properly synchronised with the remote system at this point, so flush the input buffer now." The expect statement has the -notransfer flag which doesn't discard the input buffer even if the pattern does match, so I think I need the opposite of that.
Are there any other useful techniques that I can use to make the remote shell behave more predictably? I understand that Expect goes through a lot of work to make sure that the spawned session appears to be interactive to the remote system, but I'd rather that some of the more annoying interactive features (such as the horizontal scrolling of ksh) be turned off.
If you want to throw away all output Expect has seen so far, try
expect -re $
This is a regexp match on $, which means the end of the input buffer, so it will simply skip everything received so far. More details are in the Expect man page.
You could try "set -o multiline" or COLUMNS=1000000 (or some other suitably large value).
I have had difficulty with ksh and Expect in the past. My solution was to use something other than ksh for a login shell.
If you can change the remote login to other than ksh (using the chsh command or editing /etc/passwd) then you might try this with /bin/sh as the shell.
Another alternative is to tell ksh that the terminal is a dumb terminal, disallowing it from doing any special processing.
$ export TERM=""
might do the trick.
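Putting the suggestions from this and the previous answer together, a hedged sketch of what you might send to the remote ksh right after logging in (whether each line helps depends on the ksh version; set -o multiline only exists in newer ksh93 releases):
export TERM=""                         # present a dumb terminal so ksh skips any fancy redrawing
export COLUMNS=1000000                 # make the line so wide that ksh never scrolls horizontally
set -o multiline 2>/dev/null || true   # ksh93 only: wrap long lines instead of scrolling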

How do I pause to inspect results of sh commands run by a Makefile?

So, I have a Makefile which runs different commands to build the software. I execute make from within an MSYS / MinGW environment.
I found, for example, the sleep <seconds> command, but this only delays execution. How can I make it wait for a key to be pressed instead?
You can use the read command. When you are done, you press Enter and your script/Makefile continues. It's a bash builtin, so it should also work on MinGW.
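A minimal sketch of what that can look like in a Makefile (the target names and commands here are made up, and recipe lines must start with a tab). Since each recipe line runs in its own shell, keep the prompt and the read on the same line:
inspect:
	$(MAKE) all
	@echo "Check the output above, then press Enter to continue..."; read dummy
	$(MAKE) install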
My suggestion doesn't stop execution, but it halts and resumes the display on capable terminals:
Use Ctrl-S to halt the display and Ctrl-Q to resume it.
You don't need to modify your Makefile.
Pipe the output of the build through more (or less)
e.g.
make <make command line> | more
Or output everything to a file while still watching make progress on screen with your friend tee. I normally prefer this to less or more for bulkier projects.
make <make input arguments> 2>&1 | tee /some/path/build.log
