Automatic yes command when patching? - terminal

I am trying to patch a kernel with the following command:
patch -p1 < 0001-Linux-3.4.4.patch
However, I keep receiving y/n prompts such as:
The next patch would create the file arch/arm/mach-at91/pm_slowclock.S,
which already exists! Assume -R? [n]
I have tried to solve this issue by automating it with this command:
yes | patch -p1 < 0001-Linux-3.4.4.patch
However, the terminal still prompts me for a y/n response.
Can anyone help me out here? Thanks

You are trying to supply standard input to the patch command from two places at once: a pipe (|) from the output of yes and redirection (<) from a patch file. The redirection is performed after the pipeline is set up, so standard input will come from the patch file, not from the pipeline.
patch does not read the answers to its questions from standard input; it reads them directly from the controlling terminal device.
patch has a couple of options to skip asking questions:
-f or --force will assume that patches are not reversed.
-t or --batch will assume that patches that look reversed are reversed.
There is also a -R or --reverse option to explicitly indicate that the patch is reversed.
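So, sticking with your example, either of these should run without stopping to ask; which one you want depends on whether the tree really has the patch applied already:
patch -p1 -f < 0001-Linux-3.4.4.patch    # never assume -R; try to apply the patch as-is
patch -p1 -t < 0001-Linux-3.4.4.patch    # assume -R for patches that look reversed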

Related

Unable to install sdkman on macos

I am unable to install sdkman on my macOS machine. I have referred to the sdkman install instructions and to "Can't install sdkman on Mac OS". Still, I am missing something. Can someone please help me? I am new to macOS and sdkman.
When I go to a bash terminal and type curl -s "https://get.sdkman.io" | bash, it prints the message "failed writing body" on the terminal and opens my bash profile. What am I supposed to do next? I tried to follow the steps mentioned at the above URLs, and even used source as suggested, but I guess something is missing. I never actually wrote anything in my bash profile, so source would not do anything anyway. I made multiple attempts using what I found online, but sdk version never gives any output; it keeps saying sdk command not found. I read that I needed to upgrade curl, and I did that too, still with no success. Can someone please write out or explain the steps I am missing? I would appreciate it. I did search online, but either the steps are not clear or I am not getting something right. Thanks.
It looks like the piped bash closes the read end of the pipe before curl finishes writing the whole page. When you issue curl -s "https://get.sdkman.io" | bash, as soon as bash has read what it wants, it closes its input stream from curl. curl doesn't really expect this and throws a "failed writing body" error. You might want to try piping the stream through an intermediary program that always reads the whole page before feeding it to bash. For instance, you can try something like this (running tac twice before piping to bash):
curl -s "https://get.sdkman.io" | tac | tac | bash
tac is a Unix program that can concatenate and print files in reverse. In this case, it reads the entire input page and reverses the line order (hence we run it twice). Because it has to read the whole input to find the last line, it will not output anything to bash until curl is finished. bash will still close the read stream when it gets what it needs, but that only affects tac, which doesn't complain about it.
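If you want to avoid the pipe entirely, a simpler workaround (my suggestion, not part of the original answer) is to save the installer to a file first and run it from there; sdkman-install.sh below is just a name picked for the example:
curl -s -o sdkman-install.sh "https://get.sdkman.io"
bash sdkman-install.sh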

Unix side-by-side difference with a remote hidden file

I am working on a battery of automated tests which runs on two Unix virtual machines using ksh. The VMs are independent and have practically the same .profile file. I would like to study the differences between them by launching:
tkdiff /usr/system/.profile system@{external_IP}:/usr/system/.profile
on the first VM but it doesn't work.
I suppose that directly accessing a hidden file is not possible. Is there a solution to my problem, or maybe an alternative?
If you want to compare different files on two remote machines, I suggest the following procedure:
1. Compare checksums:
First compare the checksums. Use sum, md5sum or sha256sum to compute a hash of the file. If the hashes are the same, the probability of having the same file is extremely high! You can increase that confidence further by checking the total number of characters, lines and words in the file using wc.
$ file="/usr/system/.profile"
$ md5sum "$file" && wc "$file"
$ ssh user@host "md5sum '$file' && wc '$file'"
2. Run a simple diff:
Run a simple diff using the classic command line tools. They follow the POSIX convention of treating - as standard input (/dev/stdin). This way you can do:
$ ssh user@host "cat -- '$file'" | diff "$file" -
Note: with old versions of tkdiff or new versions of svn/git, things can get tricky here due to bugs in tkdiff. It will quickly throw errors of the form svn [XXXX] file .... is not a working copy or file xxxx is not part of a revision control system if one of the files is under version control or you end up in a directory under version control. Stick to diff!
You are using the filename convention "user@host:/path/to/file" for the second argument to tkdiff.
That naming convention is not native to ksh; it is understood by some programs such as scp (which can be interactive, e.g. asking for a password for the remote system or other authentication-related questions).
But the tkdiff man page does not mention built-in support for the userid@host:/path/to/file naming convention, and neither is such support built into ksh.
So you may need two steps: first use scp or similar to copy the remote file locally, then run tkdiff with the local file as one argument and the just-copied file as the other; or arrange to mount part of the other VM's filesystem locally and then run tkdiff with appropriate arguments.
Obviously, both files need to be readable by your userid, or by the user specified in userid@host:/path/to/file, for this to work.
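A rough sketch of that two-step approach (the temporary path is just a placeholder):
scp system@{external_IP}:/usr/system/.profile /tmp/remote.profile
tkdiff /usr/system/.profile /tmp/remote.profile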
You can do the comparison directly over ssh, streaming each remote file with cat, like this:
tkdiff <(ssh system@{external_IP_1} 'cat /usr/system/.profile') <(ssh system@{external_IP_2} 'cat /usr/system/.profile')
In your case, to compare against the local .profile file:
tkdiff /usr/system/.profile <(ssh system@{external_IP} 'cat /usr/system/.profile')
Have you simply tried the plain diff command (with the -b and -B options to ignore whitespace differences and blank lines)?
diff -b -B /usr/system/.profile <(ssh system@{external_IP} 'cat /usr/system/.profile')

What is the difference between these two Bash commands?

What's the difference between these two Bash commands?
bash <(curl -sL https://raw.githubusercontent.com/node-red/raspbian-deb-package/master/resources/update-nodejs-and-nodered)
curl -sL https://raw.githubusercontent.com/node-red/raspbian-deb-package/master/resources/update-nodejs-and-nodered | bash
The first command gave me this prompt:
Are you really sure you want to do this ? (y/N) ?
but the second did not.
In the first command, bash inherits its standard input from its parent. Assuming you typed the command at your prompt, the parent would be your interactive shell, whose standard input is (in the absence of any other change) your terminal emulator.
In the second command, bash's standard input is the output of curl, not a terminal, which means the standard input of the script executed by bash is also the output of curl.
Whatever command is asking for confirmation only does so if it detects that standard input is a terminal. Worse, if the script is trying to read from standard input, it may actually consume part of itself, if it wins the race condition with bash for reading from the pipe.
The correct thing to do (and the secure thing) is to save the output of curl to a file first, then verify what it is you are running before actually doing so.
curl -sL https://raw.githubusercontent.com/node-red/raspbian-deb-package/master/resources/update-nodejs-and-nodered > update-script
# look at update-script
bash update-script
By "look", I mean either visually inspect the output, or at least compare a locally computed checksum with a checksum provided by the source to ensure that the bytes you received are the bytes that you were supposed to get. (This guards agains network corruption, man-in-the-middle attacks, etc.)

Echoing to a command after a delay in Bash

I want to copy a .bin file into a .img file using mcopy. For this, I can use mcopy -i image.img bin.bin ::. When using this, it will tell me: Long file name "bin.bin" already exists.
a)utorename A)utorename-all r)ename R)ename-all o)verwrite O)verwrite-all
s)kip S)kip-all q)uit (aArRoOsSq):. Given the size and importance of the files in this project (small and unimportant, just so you know), I just always want to answer O.
So I searched, and found this could be done by using the command: echo "O" | mcopy -i image.img bin.bin ::. Great. However, mcopy has a slight delay, because of which the echo does NOT enter O at the right time (it arrives too soon). I tried { sleep 2; echo "O"; } | mcopy -i image.img bin.bin ::, which does not help either.
So: How to actually echo text to a command after a delay, using bash?
(For the comments: adding -n to the mcopy command does not work either.)
EDIT: There seemed to be some confusion about the purpose of the question, so I will try to clarify it. Point is, I have a problem and I want it solved. This could be done by using mcopy in an alternative way, as proposed in the comments already, OR by delaying the echo to the command (as is the question).
Even if my problem is solved in a way where the mcopy command is altered, that still would not answer the question. So please keep that in mind.
You're asking the wrong question, and you already know the answer to the question you're asking.
For the question "How to actually echo text to a command after a delay, using bash?", the answer is precisely:
{ sleep $DELAY; echo $TEXT; } | command
However, that should hardly ever be necessary. It provides the given text to command's standard input after the given delay, which may cause the command to wait a bit before proceeding with the read input. But there is (almost) never a case where the data needs to be delayed until the command is already waiting for it -- if the command is reading from standard input.
In the case of mtools, however, mcopy is not reading the clash code from standard input. Instead, it is reading it directly from /dev/tty, which is the terminal associated with the command. Redirecting standard input, which is what the bash pipe operator does, has no effect on /dev/tty. Consequently, the problem is not that you need to delay sending data to mcopy's standard input; the problem is that mcopy doesn't use standard input, and bash has no mechanism to hijack /dev/tty in order to fake user input.
So the other question might be "how to programmatically tell mcopy which clash option to use?", but apparently you know the answer to that one, too: use the -D command line option (which works with all relevant mtools utilities).
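If I'm reading the mtools documentation correctly, the clash letter is passed straight to -D, so something like the line below should do it; check the "name clashes" section of the mtools man page for exactly what the upper- and lowercase letters mean there:
mcopy -D o -i image.img bin.bin ::    # or -D O, depending on which clash handling you want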
Finally, a more complicated question: "Is there some way to automate a utility which insists on reading input from /dev/tty?" Here, the answer is "yes" but the techniques are not so simple as just piping. The most common way is to use the expect utility, which allows you to spawn a subprocess whose /dev/tty is a pseudo-tty which expect can communicate with.
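A minimal sketch of the expect route (the prompt pattern is simply copied from the message quoted above, so adjust it if mcopy's real prompt differs):
expect -c 'spawn mcopy -i image.img bin.bin ::; expect "(aArRoOsSq)"; send "O\r"; expect eof'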

Can colorized output be captured via shell redirect? [duplicate]

This question already has answers here:
How to trick an application into thinking its stdout is a terminal, not a pipe
Various bash commands I use -- fancy diffs, build scripts, etc. -- produce lots of color output.
When I redirect this output to a file, and then cat or less the file later, the colorization is gone -- presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
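For example, something along these lines (build-output.txt and make are just stand-ins for your own file and command):
script build-output.txt     # starts a new shell; everything printed, color codes included, is recorded
make                        # run whatever produces the colored output
exit                        # end the recording
less -R build-output.txt    # view the recording with the color codes interpreted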
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. edit: Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode because less can tell which things are control codes and which are actually characters going to the screen.
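Putting the two together, a small example (ls here is just a stand-in; every tool has its own flag for forcing color):
ls --color=always > listing.txt   # force color codes even though stdout is a file
less -R listing.txt               # interpret the color codes when viewing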
Inspired by the other answers, I started using script. I had to use -c to get it working, though. None of the other suggestions (tee, the various other script invocations) worked for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell commands during the test with Python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed, because otherwise the output cannot be observed live; it arrives in big chunks
--quiet suppresses script's own output
-c, --command directly provides the command to execute; piping from my command into script did not work for me (no colors)
--return to make script propagate the exit code of my command so I know if my command has failed
I found that using script to preserve colors when piping to less doesn't really work (less gets all messed up, and on exit bash is messed up too) because less is interactive. script seems to mangle input coming from stdin even after it exits.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they realize the output is not a TTY (i.e. when you redirect them into another program or a file). You can tell some of them to force color output anyway, and tell the pager to interpret the color codes, for example with less -R.
This question over on superuser helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's running from a shell.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
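For example, something like this (unbuffer ships with the expect package; apt update is just an example command):
unbuffer apt update | tee apt-update.log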
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time), then just send the output of tee to /dev/null:
command | tee filename > /dev/null
