What does the `grep -m 1` command mean in a UNIX shell?

I googled this command but couldn't find anything about it.
grep -m 1 "\[{" xxx.txt > xxx.txt
However, when I typed this command, no error occurred.
Actually, there was no result from this command either.
Could anyone explain how this command works?

This command reads from and writes to the same file, but not in a left-to-right fashion. In fact > xxx.txt runs first, emptying the file before the grep command starts reading it, so there is no output. (As for the flag itself: -m 1 simply tells grep to stop reading after the first matching line.) You can fix this by storing the result in a temporary file and then renaming that file to the original name.
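A minimal sketch of that fix (the temporary file name is arbitrary):
grep -m 1 '\[{' xxx.txt > xxx.txt.tmp && mv xxx.txt.tmp xxx.txt
If moreutils is installed, its sponge utility does the same job by soaking up all input before writing:
grep -m 1 '\[{' xxx.txt | sponge xxx.txt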
PS: Some commands, like sed with its -i flag, can edit a file in place, which works around this issue by not relying on shell redirects.
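For instance, a rough equivalent of the original command using GNU sed's in-place editing (a sketch: -n suppresses normal output, and p;q prints the first matching line then quits, so the file ends up containing only that line):
sed -i -n '/\[{/{p;q}' xxx.txt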

Related

How to pipe output of a command that expects file argument

I know you can pipe the output of one command to another, for example
ls -la | less
to see the output of ls -la inside less instead of in the terminal.
But if you use a command with a parameter that saves the output to a file:
command --save-to-file file.txt
Then how do you pipe that to another command? This will not work:
command --save-to-file | less
because command will complain that --save-to-file was used without an argument (a filename).
If I remember correctly, there was something like a buffer or a temp file in RAM that you could use instead of file.txt, so you could do something like:
command --save-to-file ram-buffer.txt && cat ram-buffer.txt
without even creating a file on disk. Is that right?
Why do I need that?
Some commands print only basic output to stdout; their more useful output cannot be printed, only saved to a file. The thing is, I'm not interested in saving that more useful output to a file at all, just in printing it in the terminal, or piping it to a chain of other commands that do filtering etc. and eventually print the processed output.
I would rather not be responsible for creating a temp file and then deleting it, etc. Ideally I would just use some kind of magic file (or redirection) in place of file.txt that I could pipe to another command.
It is important to me not to write any of the output to disk if at all possible; just print it in the terminal or pipe it to other command(s).
At the moment I'm trying to capture the output of PHPUnit:
phpunit --log-junit log.xml
which is not a shell command but a PHP script that uses:
#!/usr/bin/env php
But I remember once having a Linux command whose output I wanted, where the useful form of it was only available via a parameter like --save-to-file outputfile.txt.
Perhaps piping/redirecting output that was designed to be saved to a file is not binary-safe, and such output can be corrupted when piped/redirected? Can that be the case?
Some programs have special handling for -. For example, you can tell tar to write to stdout so it can be used in a pipeline. This would create a tarball locally and untar it remotely without the tarball ever being written to disk:
tar -cf - *.txt | ssh user@host tar -C /dir/ -xf -
You can use /dev/stdout with nearly all programs, as long as they don't need a seekable file.
command --save-to-file /dev/stdout
As @Benjamin W. pointed out in the comments, you can save to /dev/stdout, which is standard output, and then pipe that output to whatever you want (e.g. less):
command --save-to-file /dev/stdout | less
Take care: the command may write other things to stdout as well. In that case you can discard the normal stdout and smuggle the saved output out through stderr instead. Note that the order of the redirects matters: 2>&1 must come first, so that stderr is duplicated onto the pipe before stdout is pointed at /dev/null (the form >/dev/null 2>/dev/stdout would send stderr to /dev/null as well, since /dev/stdout is resolved after stdout has already been redirected):
command --save-to-file /dev/stderr 2>&1 >/dev/null | less
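Applied to the PHPUnit case from the question (assuming phpunit accepts /dev/stderr as a log path, which it should on Linux), the JUnit XML goes to the pipe while phpunit's normal progress output is discarded:
phpunit --log-junit /dev/stderr 2>&1 >/dev/null | less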
If both stderr and stdout are used, you may be reduced to something exotic, such as writing your own driver or poking at /proc/pid/mem.

Pipe between two text files in BASH

I'm new to all this (a one-day-old bash coder), so as stupid as this question might sound, please take your time and respond accordingly :)
I created a script in bash that should do some things to a given input (a text file). I'm having a hard time figuring out how to use a pipeline to run the script (called cleanLines, for the sake of the question) on a text file (named test.txt) so that it will "clean its lines".
I added the line of code below to the top of my script file (cleanLines):
PATH=${PATH[*]}:.
Now what?
./test.txt | ./cleanLines.txt
Doesn't seem to work.
I should note that the files are in the same directory, if that's worth anything to you.
EDIT: Oh, and cleanLines is also a text file (.txt).
And it gives me the error:
-bash: ./cleanLines.txt: Permission denied
A pipe | feeds the output of one command into another command, but test.txt is not a command, it's data. To redirect input from a file you would do this instead:
cleanLines < test.txt
This assumes that your script is executable and expects input on standard input.
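Given the "Permission denied" error, the script is probably not executable yet. A likely fix (assuming cleanLines.txt starts with a proper shebang such as #!/bin/bash):
chmod +x cleanLines.txt
./cleanLines.txt < test.txt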

Remove whitespaces in shell .SH file

Due to processes outside of my control, I need to run multiple SH files which contain lengthy CURL commands. The problem is that whatever process created these commands seems to have included a line of whitespace at the very end. If I call a file as is, it fails; if I open the file, hit backspace on the first fully empty line and save it, it works perfectly.
Is there any way to put some kind of command into the SH file itself so that it removes the unnecessary trailing whitespace?
More info would be helpful, but the following might work:
If you need to put something into each of the files containing the curl commands, as you mention, you could try putting exit as the last line of each curl script (though this also depends on how you're calling the 'curl files'):
exit
If you can run a separate script against the files that have a blank line, perhaps sed the blank lines away?
sed -i '/^[[:space:]]*$/d' "$fileWithLineOfSpaces"
edit:
Or (after thinking about it), perhaps simply delete the last line of the file:
sed -i '$d' "$file"
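If only a trailing blank line should go, rather than the last line unconditionally, a safer sketch deletes the last line only when it consists of whitespace (the loop over *.sh is illustrative):
for f in *.sh; do
  sed -i '${/^[[:space:]]*$/d}' "$f"
done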

First line in file is not always printed in bash script

I have a bash script that prints a line of text into a file and then calls a second script that prints some more data into the same file. Let's call them script1.sh and script2.sh. The reason it's split into two scripts is that I have different versions of script2.sh.
script1.sh:
rm -f output.txt
echo "some text here" > output.txt
source script2.sh
script2.sh:
./read_time >> output.txt
./run_program
./read_time >> output.txt
Variations on the three lines in script2.sh are repeated.
This seems to work most of the time, but every once in a while the file output.txt does not contain the line "some text here". At first I thought it was because I was calling script2.sh as ./script2.sh, but the problem still occurs even when using source.
The problem is not reproducible, so even when I try changing something I don't know whether it's actually fixed.
What could be causing this?
Edit:
The scripts are very simple. script1 is exactly as you see here, but with different file names. script2 is what I posted, but with the same 3 lines repeated, and ./run_program can have different arguments. I grepped for the output file and for >, but it doesn't show up anywhere unexpected.
The way these scripts are used is that script1 is created by a program (the only difference between the versions is the source script2.sh line). This script1.sh is then run on a different computer (Linux on an FPGA, actually) using ssh. Before that is done, the output file is also deleted using ssh. I don't know why; I didn't write all of this. Also, I've checked the code running on the host: the only mentions of the output file are when it is deleted over ssh and when it is copied back to the host after script1 is done.
Edit 2:
I finally managed to make the problem reproducible at a reasonable rate by stripping script2.sh down to a single line printing into the file. This also let me test a bit faster. Once I had this, the problem occurred between 1 and 4 times per 10 runs. Removing the command that deleted the file over ssh before the script was run seems to have solved the problem. I will test some more to be sure, but I think it's solved, although I'm still not sure why it was a problem: I thought the ssh command would not exit before all the remove commands had executed.
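For what it's worth, if the delete and the run were issued as separate ssh commands, combining them into a single session (hypothetical host name here) would guarantee the ordering:
ssh user@fpga 'rm -f output.txt && ./script1.sh'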
It is hard to tell without seeing the real code. The most likely explanation is that you have a typo, > instead of >>, somewhere in one of the script2.sh files.
To verify this, set the noclobber option with set -o noclobber. The shell will then refuse to overwrite an existing file with > and report an error, which makes the culprit easy to spot.
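A quick demonstration of noclobber (the file name is arbitrary):
set -o noclobber
echo first > file.txt     # creates the file
echo second > file.txt    # fails: "cannot overwrite existing file"
echo second >| file.txt   # >| explicitly overrides noclobber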
Another possibility is that the file is removed under certain rare conditions, or damaged by some command with random access to it; look for commands using this file without >>. Or it is used by some command as both input and output, so the two step on each other; look for the file used with <.
Lastly, you could have a race condition with a command that outputs to the file in the background and was started before that echo.
Can you grep all your scripts for 'output.txt'? What about scripts called inside read_time and run_program?
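For example, a recursive, line-numbered search over everything the scripts might source:
grep -rn 'output\.txt' .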
It looks like something in one of the script2.sh scripts must be overwriting, truncating, or doing a substitution on output.txt.
For example, there could be a '> output.txt' buried inside a conditional for a condition that rarely occurs. Just a guess, but it would explain why you don't always see it.
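A hypothetical illustration of such a bug:
if [ "$rare_condition" = "true" ]; then
    echo "extra data" > output.txt    # truncates; should have been >>
fi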
This is an interesting problem. Please post the solution when you find it!

Diff output from two programs without temporary files

Say I have two programs a and b that I can run with ./a and ./b.
Is it possible to diff their outputs without first writing to temporary files?
Use <(command) to pass one command's output to another program as if it were a file name. Bash connects the inner command's output to a pipe and passes a file name like /dev/fd/63 to the outer command.
diff <(./a) <(./b)
Similarly you can use >(command) if you want to pipe something into a command.
This is called "Process Substitution" in Bash's man page.
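For completeness, a small sketch of the output form >(command) mentioned above, using tee to fan one stream out to a second consumer:
./a | tee >(wc -l > line_count.txt) | less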
Adding to both answers: if you want to see a side-by-side comparison, use vimdiff:
vimdiff <(./a) <(./b)
One option would be to use named pipes (FIFOs):
mkfifo a_fifo b_fifo
./a > a_fifo &
./b > b_fifo &
diff a_fifo b_fifo
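One housekeeping detail: the FIFOs persist in the file system after diff exits, so remove them when you are done:
rm a_fifo b_fifo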
... but John Kugelman's solution is much cleaner.
For anyone curious, this is how you perform process substitution in the Fish shell:
Bash:
diff <(./a) <(./b)
Fish:
diff (./a | psub) (./b | psub)
Unfortunately the implementation in fish is currently deficient; fish will either hang or use a temporary file on disk. You also cannot use psub for output from your command.
Adding a little more to the already good answers (they helped me!):
The docker command writes its help to stderr (i.e. file descriptor 2).
I wanted to see whether docker attach and docker attach --help gave the same output.
$ docker attach
$ docker attach --help
Having just typed those two commands, I did the following:
$ diff <(!-2 2>&1) <(!! 2>&1)
!! is the same as !-1, which means run the command one before this one, i.e. the last command.
!-2 means run the command two before this one.
2>&1 sends file descriptor 2 output (stderr) to the same place as file descriptor 1 output (stdout).
Hope this has been of some use.
For zsh, =(command) automatically creates a temporary file and replaces =(command) with the path of the file itself. (By contrast, ordinary command substitution $(command) is replaced with the output of the command.)
This zsh feature is very useful for comparing the output of two commands with a diff tool, for example Beyond Compare:
bcomp =(ulimit -Sa | sort) =(ulimit -Ha | sort)
For Beyond Compare, note that you must use bcomp (instead of bcompare), since bcomp launches the comparison and waits for it to complete, whereas bcompare launches the comparison and exits immediately, at which point the temporary files created to store the command output disappear.
Read more here: http://zsh.sourceforge.net/Intro/intro_7.html
Also notice this:
Note that the shell creates a temporary file, and deletes it when the command is finished.
and the following, which explains the difference between <(...) and =(...):
If you read zsh's man page, you may notice that <(...) is another form of process substitution which is similar to =(...). There is an important difference between the two. In the <(...) case, the shell creates a named pipe (FIFO) instead of a file. This is better, since it does not fill up the file system; but it does not work in all cases. In fact, if we had replaced =(...) with <(...) in the examples above, all of them would have stopped working except for fgrep -f <(...). You can not edit a pipe, or open it as a mail folder; fgrep, however, has no problem with reading a list of words from a pipe. You may wonder why diff <(foo) bar doesn't work, since foo | diff - bar works; this is because diff creates a temporary file if it notices that one of its arguments is -, and then copies its standard input to the temporary file.
