I'm thinking a hypothetical CMDOUTPUT would be useful:
locate -r 'regexp...' # locate finds a file: /myfile.
# Shell puts `/myfile' string into CMDOUTPUT
vim $CMDOUTPUT # No need to run locate again as with: vim `!!`
The locate command above is just an example. I want the output saved for all commands that I run so that if I need it I can access it quickly. (The output should still be printed by the command to stdout.) I don't want to do
CMDOUTPUT="$(...)"
or
command | tee /tmp/cmdoutput
or anything else that I have to do, because that's more typing at the prompt for everything I run: I want the shell to do it all in the background. Again, to make it clear: I am casually typing commands away and decide "Oh, I want to use the output of that last command in this command, let me just retrieve it...". Can I tell the shell to store the output somehow so that I can retrieve it?
If there's no option for it, is there some way that I can implement it that is as close to invisible as it can be, meaning exit codes from the command are not lost (...and that's all I can think of, but I'm sure there are other subtleties) etc. I'm primarily thinking of zsh, but answers for any shell would be useful.
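To make "invisible" a bit more concrete, this is the rough kind of hook I have in mind, as a sketch only (zsh-specific; the file name /tmp/cmdoutput, the variable CMDOUTPUT and the hook logic are all made up for illustration, and it glosses over subtleties like stderr, flushing and background jobs):
preexec() {
  exec {_cmdout_fd}>&1               # remember the real stdout
  exec > >(tee /tmp/cmdoutput)       # duplicate everything the command prints into a log
}
precmd() {
  local ret=$?                       # try to keep the command's exit status intact
  if [[ -n $_cmdout_fd ]]; then
    exec 1>&$_cmdout_fd {_cmdout_fd}>&-   # put stdout back the way it was
    unset _cmdout_fd
    CMDOUTPUT=$(</tmp/cmdoutput)     # captured text (tee may still be flushing; a real version would wait)
  fi
  return $ret
}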
I found a solution, not sure if this is exactly what you're looking for. But it should provide a start :)
zsh | tee log >&1
As part of a bigger script I'm using print -z ls to have zsh's input buffer show the ls command. This requires me to manually press enter to actually execute the command. Is there a way to have ZSH execute the command?
To clarify, the objective is to have a command run, keep it in history, and in case another command is running it shouldn't run in parallel or something like that.
The solution I've found is:
python -c "import fcntl, sys, termios; fcntl.ioctl(sys.stdin, termios.TIOCSTI, '\n')"
I'm not sure why, but sometimes you might need to run the command twice for the buffered command to actually be executed. In my case this happens because I send a process to the background, although that still doesn't make much sense, because that process sends a signal back to the original shell (triggering a trap), which is what actually calls this code.
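Putting the two pieces together, a minimal sketch looks like this (ls -l is just a stand-in for the real command; note that Python 3 wants the injected character as bytes, hence b"\n"):
print -z 'ls -l'    # push the command into zsh's editing buffer
python -c 'import fcntl, sys, termios; fcntl.ioctl(sys.stdin, termios.TIOCSTI, b"\n")'    # "type" Enter into the tty so the buffered line runs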
In case anyone is interested, this was my goal:
https://gist.github.com/alexmipego/89c59a5e3abe34faeaee0b07b23b56eb
I want to send commands in the adb shell itself, as if I had done the following in cmd:
>adb shell
shell#:/ <command>
I am using Python 3.4 on a Windows 7 64-bit machine. I can send one-line shell commands simply using subprocess.getoutput, such as:
subprocess.getoutput('adb pull /storage/sdcard0/file.txt')
as long as the commands are ones that adb itself recognizes, such as pull and push. However, there are other commands, such as grep, that need to be run in the shell (as above), since adb does not recognize them. For example, the following line will not work:
subprocess.getoutput('adb shell ls -l | grep ...')
To enter the commands in the shell, I thought I needed some kind of expect library, as that is what 'everyone' suggests; however pexpect, wexpect, and winexpect all failed to work. They were written for Python 2, and even after porting them to Python 3 and going through the .py files by hand (even those tweaked for Windows), nothing worked, each for a different reason.
How can I send the input I want to the adb shell directly?
If none of the already recommended shortcuts work for you, you can still go the 'regular' way and enter commands in the adb shell with subprocess.Popen:
import subprocess
import time

cmd1 = 'adb shell'
cmd2 = 'ls -l | grep ...'

# Open the adb shell with a pipe attached to its stdin
p = subprocess.Popen(cmd1.split(), stdin=subprocess.PIPE)
time.sleep(1)
# Send the command, terminated by a newline, and flush the pipe
p.stdin.write(cmd2.encode('utf-8'))
p.stdin.write('\n'.encode('utf-8'))
p.stdin.flush()
time.sleep(3)
p.kill()
Some things to remember:
even though you import subprocess you still need to invoke subprocess.Popen
sending cmd1 as a string or as items in a list should work too but '.split()' does the trick and is easier on the eyes
since you only specified that you want to enter input into the shell, you only need stdin=PIPE. stdout would only be necessary if you wanted to receive output from the shell
time.sleep(1) isn't really necessary; however, since many have complained about input timing differences between Python 2 and 3, consider using it. ('They' might have been using versions of 'expect' that need the shell's reply first.) This code also worked when I tested it with time.sleep(0) instead.
stdin.write will raise an error if the input is not encoded properly. Python 3 strings are Unicode by default, so the command has to be turned into bytes; writing a bytes literal directly (e.g. b'ls ...') did not work for me in my tests, but .encode() did. Don't forget the trailing newline!
If you use .encode(), there is a worry that the line might not get sent properly, so to be safe it is good to include a flush().
time.sleep(3) is completely unnecessary, but if your command takes a long time to execute (e.g. a recursive search through the entire device piped out to a txt file on the memory card), give it some extra time before killing anything.
Remember to kill. If you don't kill it, the pipe may remain open, and even after exiting the test app on the console, the next command still went to the shell even though the prompt appeared to be my regular cmd prompt.
Amichai, I have to start by pointing out that your own "solution" is pretty awful, and your explanation makes it even worse. You are doing all those unnecessary things just because you do not understand how shell command parsing works (here I mean your PC's OS shell, not adb).
When all you needed was just this one command:
subprocess.check_output(['adb', 'shell', 'ls /storage/sdcard0 | grep ...']).decode('utf-8')
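The same quoting principle works on the command line: keep the whole pipeline as one argument to adb shell so that grep runs on the device rather than locally (the path and pattern below are only illustrative):
adb shell 'ls -l /storage/sdcard0 | grep txt'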
Various bash commands I use -- fancy diffs, build scripts, etc, produce lots of color output.
When I redirect this output to a file and then cat or less the file later, the colorization is gone, presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
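For example, a minimal recorded session might look like this (the log file name is arbitrary):
script colorized.log      # starts a new shell; everything displayed is also written to the file
git diff                  # or any other command whose output is colorized
exit                      # ends the recording
less -R colorized.log     # view the recording with the color codes interpreted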
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. edit: Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode because less can tell which things are control codes and which are actually characters going to the screen.
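Putting those two points together, a short example (the file name is arbitrary):
ls --color > listing.txt     # --color (without =auto) forces the codes even though stdout is a file
less -R listing.txt          # -R renders the codes instead of showing raw escape sequences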
Inspired by the other answers, I started using script. I had to use -c to get it working, though. None of the other answers, including tee and the various script examples, worked for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell commands during the test with Python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed, because otherwise the output is hard to observe live; it arrives in big chunks
--quiet suppresses script's own output
-c, --command directly provides the command to execute; piping from my command into script did not work for me (no colors)
--return to make script propagate the exit code of my command so I know if my command has failed
I found that using script to preserve colors when piping to less doesn't really work, because less is interactive: less is all messed up, and on exit bash is all messed up too. script seems to really mess up input coming from stdin, even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they realize the output is not a TTY (i.e. when you redirect them into another program). You can tell some of those programs to force color output, and tell the pager to interpret the colorization; for example, use less -R.
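For example, grep can be forced to keep its colors with --color=always, and less told to render them with -R (the pattern and file name are placeholders):
grep --color=always 'pattern' file.txt | less -R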
This question over on superuser helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's writing to a terminal.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
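As a sketch of the idea (the exact command and file name are just examples):
unbuffer apt list --upgradable | tee apt-output.log    # apt sees a pseudo-terminal, so its colors survive the pipe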
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time), then just send the output of tee to /dev/null:
command | tee filename > /dev/null
The general idea is pretty simple: I want to make a script for a certain task, so I do it in the shell (any shell), and then I want to copy the commands I have used.
If I copy all the stuff in the window, then I have a lot of stuff to delete and to correct (and it's not easy to copy from the shell).
To sum up: I want to take all the things I wrote...
Is there an easy way to do this easy task?
Update: Partial solution
In bash, the solution is pretty simple, there is a history command, and there are ports of the idea:
IRB: Tweaking IRB
Cmd: Use PowerShell -> Get-History (or use cygwin)
Another Update:
I found that doskey has a /history parameter to do this:
cmd: Doskey /history >> history.cmd
Yes, you can use:
history -w filename.sh
This will save your command history to filename.sh. You may need to edit that to keep just the lines at the end that are part of your command sequence.
NOTE: This is a bash command and will not work with all shells.
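For instance, one way to trim it down afterwards (the file names and line count are arbitrary):
history -w full-history.sh             # dump the whole history to a file
tail -n 20 full-history.sh > task.sh   # keep only the recent commands that belong to the task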
script may help here. Typing script will throw you into a new shell and save all input and output to a file called typescript. When you're done with your interaction, exit the shell. The file typescript is then amenable to grep'ing. For example, you might grep for your prompt (see the sketch below) and save the output to a file. If you're a clumsy typist like me, then you may need to do some cleanup work to remove backspaces. There used to be a program that did this, but I don't seem to find it right now. Here is one I found on the 'net: http://www.cat.pdx.edu/tutors/files/fixts.cpp
This approach is especially useful if you want to track and post on the web an entire interactive session.
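A rough sketch of the grep step mentioned above, assuming a prompt that ends in "$ ":
script                                 # record the session; work as usual, then type exit
grep '\$ ' typescript > commands.txt   # keep only the lines containing the prompt, i.e. the commands you typed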
I recently discovered 'comint-show-output' in emacs shell mode, which jumps to the first line of shell output, which I find incredibly handy when looking at shell output that exceeds a screen length. The advantages of this command over scrolling with 'page up' are A) you don't have to scan with your eyes for the first line of the output B) you only have to hit the key combo once (instead of 'page up' a number of times which probably is not known beforehand).
I thought about ending all my commands with '| more' but actually this is not what I want since most of the time, I want to retain all output in the terminal buffer, and I usually want to see the end of the shell output first.
I use OS X. Is there an equivalent terminal app (on OS X) and shell (on remote Linux) combination (so I can do something similar without using emacs all the time - I know, crazy talk)? I normally use bash, but would be fine with switching shells just for this feature.
The way I do this sort of thing is by sending my output to a file and then watching the file as it is written. You still get the results of the command dumped to terminal history in real time and can still inspect the output's actual contents further after the fact (or in another terminal, etc...)
command > output &
tail -f output
head output
You could always do something in bash like this:
alias foo='!! | more'
which would make foo run the previous command with more. I'm not sure of any way to do exactly what you are suggesting.
If you're expecting a lot of output and don't want to run your command twice, you can use tee(1) to fork the output:
my-command | tee /tmp/my-command.log | less
This will pipe the output to a paginator (less), while simultaneously logging the output to a file (in this case, a file named /tmp/my-command.log). If you need to review the output after you've quit from less, you can just cat the log file instead of re-running the command.