Passing output of tree through pipeline (BASH) - bash

Sorry if this has been asked before but I couldn't find anything.
I have an issue with an (example) script that needs to do two things:
echo the output of the tree command, and then echo the same output again, this time passing it through a pipe to another script.
The first part works fine: I get the expected tree of directories and files printed on the terminal.
However, when passing the output through the pipe, all I get on the other side is the first line. Why is this?
I have tried assigning the output to a temporary file and then using cat on this before passing through, but with no success.
Thanks
example_script:
tree Folder
tree Folder > test.pipe
...
#The other script reads from the pipe like so:
read thisthing < test.pipe
echo $thisthing #I have also tried cat $thisthing
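For what it's worth, read returns after a single line, which would explain why only the first line of the tree shows up on the other side. A minimal sketch of a receiving script that drains the whole pipe (assuming test.pipe is the same pipe or file written above):

# read consumes one line per call, so loop until the pipe is exhausted
while IFS= read -r thisthing; do
    echo "$thisthing"
done < test.pipe

# Or, more simply, dump everything at once:
cat test.pipe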

Related

How to input only the first time in a while loop

I have a while-read loop that runs my script in Terminal. If I insert an echo/read pair into the script to prompt for input, I get prompted once for every file in the directory the script is looping through.
I obviously want to avoid that, but I also don't want to hard-code the target directory my script writes its CSVs into: that is inelegant, and it means the script has to be tweaked again for every new target directory.
This is my while loop command in Terminal:
while read MS; do (cd "$MS" && bash script && cd ..); done <whichMSS.txt
And /targetDirectory/ is the part of the script that needs inputting:
exiftool -csv -Title -Source $PWD > /targetDirectory/${PWD##*/}".csv"
The actual result is that I get prompted for input for every file as the script iterates over them, which rather defeats the purpose of the while loop. Ideally I would enter /targetDirectory/ once, at the start, and not be prompted again until all the files have been looped through. I would appreciate any help!
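One common approach, sketched below on the assumption that the prompt is for the target directory: read it once before the loop and export it, so the inner script uses the variable instead of prompting and no longer competes with whichMSS.txt for stdin. TARGETDIR is a made-up name for illustration.

# Prompt once, up front, and export the answer for the inner script to use
read -p "Target directory: " TARGETDIR
export TARGETDIR

while read MS; do
    (cd "$MS" && bash script)
done < whichMSS.txt

# Inside the script, the exiftool line would then read:
# exiftool -csv -Title -Source "$PWD" > "$TARGETDIR/${PWD##*/}.csv"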

Why does a bash redirection happen *before* the command starts, if the ">" is after the command?

Recently I tried to list all of the images located in a directory I had (several hundred) and put them into a file. I used a very simple command
ls > "image_names.txt"
I was bored and decided to look inside the file, and realized that image_names.txt was itself listed in the file. Then I realized the order of operations was not what I thought. I had read the command left to right, as two separate steps:
ls (First list all the file names)
> "image_names.txt" (Then create this file and pipe it here)
Why is it creating the file first then listing all of the files in the directory, despite the ls command coming first?
When you use output redirection, the shell needs a place to put your output before the command runs (if the output were very long, buffering it could exhaust memory or be lost when the command terminates), so the first step is to open the output file and connect it to the executed command's stdout.
This is especially important to know in this kind of command
cat a.txt | grep "foo" > a.txt
since a.txt is opened first, and not in append mode, it is truncated, meaning there is no input left for cat. So the behaviour you expect, that the matching lines will be filtered from a.txt and written back to a.txt, does not actually happen. Instead you simply lose the contents of a.txt.
In short: the redirection > "image_names.txt" is performed before the ls command runs.
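A common workaround for the truncation pitfall above, sketched here, is to filter into a temporary file and only replace the original once grep has finished:

# Safe alternative to `cat a.txt | grep "foo" > a.txt`: write to a temp file,
# then move it over the original so a.txt is never read and truncated at once.
grep "foo" a.txt > a.txt.tmp && mv a.txt.tmp a.txt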

Piping input from a file to a command in windows cmd

My understanding is that the redirection operator, <, should allow me to take text from a file and give it as input to a command, as if I had typed out the contents of that file. Here is what I am trying to do:
python code.py < input.txt
I expect this to act as though I had typed the contents of input.txt after python code.py, but instead it acts as if I passed no input.
If I use cat, I get the contents of the file:
> cat input.txt
['2015-1-1','2015-5-1','2015-9-1','2015-10-1','2015-12-1','2016-1-1','2016-2-1','2016-4-1','2016-5-1'] [65,50,30,45,55,39,45,30,20]
And if I just copy and paste the contents of the file, I get the correct behavior.
I know this must be a really simple misunderstanding on my part, but I can't figure it out.
It's called redirection, not piping, but you are correct that the < operator feeds the file to the command. You can see this in action by using sort instead of echo:
sort < input.txt
This will display the text file as a list, sorted alphabetically. echo does not read its standard input, so redirecting a text file into echo simply runs echo with nothing to print.
If you just want to print a file to the command window, you can use type instead, without the redirection operator.
type input.txt
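Note that < supplies the file on standard input, not as typed-out arguments, so the program has to actually read stdin for the contents to appear. A minimal check (the one-liner below stands in for a hypothetical code.py that reads stdin):

python -c "import sys; print(sys.stdin.read())" < input.txt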

Read file as input for a command skipping lines

I'm trying to use the contents of a text file as the input for a command. I know how to read a file just fine. However, when I pass the read line to the command I want to execute, the script starts skipping every other line.
Given a plain text file named queue:
one
two
three
four
This prints out each line as expected:
queue=`pwd`/queue
while read input; do
echo $input
done < $queue
output:
one
two
three
four
However, when I pass $input off to the command, every other line is skipped:
queue=`pwd`/queue
while read input; do
echo $input
transcode-video --dry-run $input
done < $queue
output (transcode-video outputs a bunch of stuff, but I omitted that for brevity. I don't believe it is relevant):
one
three
I managed to get my script working by first reading the whole file into an array and then iterating over the array, but I still don't understand why directly looping over the file doesn't work. I'm assuming the file pointer is getting advanced somehow, but I cannot figure out why. transcode-video is a ruby gem. Is there something I'm not aware of going on behind the scenes when the ruby program is executed? The author of the gem provided a sample script that actually strips lines out of the file using a sed command, and that works fine.
Can someone explain what is going on here?
The launched app inherits the loop's stdin and reads a line from it, so transcode-video consumes the next line of the queue before read can. Try:
transcode-video --dry-run $input </dev/null
Or check the manual for a command-line flag that does the job.
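Another option, sketched below, is to read the queue on a separate file descriptor so the launched command cannot steal lines from it:

# Read the queue on file descriptor 3; transcode-video keeps the normal stdin
# and can no longer consume lines meant for `read`.
queue=`pwd`/queue
while read -u 3 input; do
    transcode-video --dry-run "$input"
done 3< "$queue"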

First line in file is not always printed in bash script

I have a bash script that prints a line of text into a file and then calls a second script that prints some more data into the same file. Let's call them script1.sh and script2.sh. The reason it's split into two scripts is that I have different versions of script2.sh.
script1.sh:
rm -f output.txt
echo "some text here" > output.txt
source script2.sh
script2.sh:
./read_time >> output.txt
./run_program
./read_time >> output.txt
Variations on the three lines in script2.sh are repeated.
This seems to work most of the time, but every once in a while the file output.txt does not contain the line "some text here". At first I thought it was because I was calling script2.sh like this: ./script2.sh. But even using source the problem still occurs.
The problem is not reproducible, so even when I try to change something I don't know if it's actually fixed.
What could be causing this?
Edit:
The scripts are very simple. script1 is exactly as you see here, but with different file names. script 2 is what I posted, but then the same 3 lines repeated, and ./run_program can have different arguments. I did a grep for the output file, and for > but it doesn't show up anywhere unexpected.
The way these scripts are used is that script1 is created by a program (the only difference between the versions is the source script2.sh line). This script1.sh is then run on a different computer (Linux on an FPGA, actually) using ssh. Before that is done, the output file is also deleted using ssh. I don't know why, but I didn't write all of this. Also, I've checked the code running on the host. The only mention of the output file is when it is deleted using ssh, and when it is copied back to the host after script1 is done.
Edit 2:
I finally managed to make the problem reproducible at a reasonable rate by stripping script2.sh of everything but a single line printing into the file. This also let me do the testing a bit faster. Once I had this I got the problem between 1 and 4 times for every 10 runs. Removing the command that was deleting the file over ssh before the script was run seems to have solved the problem. I will test it some more to be sure, but I think it's solved. Although I'm still not sure why it would be a problem. I thought that the ssh command would not exit before all the remove commands were executed.
It is hard to tell without seeing the real code. The most likely explanation is that you have a typo, > instead of >>, somewhere in one of the script2.sh files.
To verify this, set the noclobber option with set -o noclobber. The shell will then refuse to overwrite an existing file with >, so the offending command fails with an error instead of silently truncating the file.
Another possibility is that the file is removed under certain rare conditions, or damaged by some command that has random access to it (look for commands using this file without >>), or used by some command as both input and output so the two step on each other (look for the file used with <).
Lastly, you could have a race condition with a command writing to the file in the background, started before that echo.
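To illustrate the noclobber suggestion, a small sketch of the behaviour you would see (the file name is just an example):

set -o noclobber
echo "first" > output.txt    # succeeds if output.txt does not exist yet
echo "again" > output.txt    # fails: "cannot overwrite existing file"
echo "again" >> output.txt   # appending is still allowed
echo "force" >| output.txt   # >| explicitly overrides noclobber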
Can you grep all your scripts for 'output.txt'? What about scripts called inside read_time and run_program?
It looks like something in one of the script2.sh scripts must be either overwriting, truncating or doing a substitution on output.txt.
For example, there could be a '> output.txt' buried inside a conditional for a condition that rarely holds. Just a guess, but it would explain why you don't always see it.
This is an interesting problem. Please post the solution when you find it!
