this works:
sort <(seq 10)
but this does not work:
sort <(seq.sh 10)
where seq.sh looks like
#/bin/bash
seq $1
How can I make seq.sh work for this?
-- edit --
Sorry for the typo.
seq.sh is executable and has the correct shebang line.
I guess it is related to EOF or something.
It also works with a trailing &:
sort <(./seq.sh 10 &)
It works if you use seq.sh as a path, for example ./seq.sh or its absolute path. Remember to make it executable.
You may need to add the ! to the shebang.
#!/bin/bash
Also I would make sure that the shell script is executable.
chmod +x seq.sh
Lastly, pipe into sort (the example uses an input of 10):
./seq.sh 10 | sort
This was my output:
user@MBP:~/Desktop$ ./seq.sh 10 | sort
1
10
2
3
4
5
6
7
8
9
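Putting the fixes together, a minimal sketch of a corrected seq.sh (with the #! shebang and execute permission) used with process substitution might look like this; the scratch directory is only there to keep the demo self-contained:

```shell
#!/bin/bash
# Work in a scratch directory so we don't clutter the current one
cd "$(mktemp -d)" || exit 1

# Recreate seq.sh with a correct shebang (#! rather than #/)
cat > seq.sh <<'EOF'
#!/bin/bash
seq "$1"
EOF
chmod +x seq.sh          # make it executable

# Process substitution now works when the script is called via a path
sort <(./seq.sh 10)
```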
Related
I am trying to script a procedure but I am really stuck now.
In a folder, I have some files, let's say:
000001.dat
000002.dat
000003.dat
000004.dat
000005.dat
000006.dat
000007.dat
000008.dat
000009.dat
000010.dat
I have a variable num such that echo $num prints 000009.
What I would like to do is delete the intermediate files, like:
rm -f {000001..${num}}*, but it doesn't work...
If I use rm -f {000001..000009}*, it works! So I think it is a problem when reading the num variable.
Any help? :)
Thank you in advance!
One way using eval:
$ x=1;y=9
$ echo {$x..$y}
{1..9}
$ eval echo {$x..$y}
1 2 3 4 5 6 7 8 9
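Applied to the original rm problem, the same trick might look like this (a sketch; the sample files and scratch directory are illustrative, and zero-padded brace ranges need bash 4+):

```shell
#!/bin/bash
cd "$(mktemp -d)" || exit 1
touch 00000{1..9}.dat 000010.dat    # recreate the sample files

num=000009
# eval forces a second pass, so the brace range expands after $num is substituted
eval "rm -f {000001..${num}}*"

ls                                  # only 000010.dat is left
```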
So, I am building a bash script which iterates through folders named by numbers from 1 to 9. The script depends on getting the folder names by user input. My intention is to use a for loop using read input to get a folder name or a range of folder names and then do some stuff.
Example:
Let's assume I want to make a backup with rsync -a of a certain range of folders. Usually I would do:
for p in {1..7}; do
rsync -a $p/* backup.$p
done
The above would recursively backup all content in the directories 1 2 3 4 5 6 and 7 and put them into folders named as 'backup.{index-number}'. It wouldn't catch folders/files with a leading . but that is not important right now.
Now I have a similar loop in an interactive bash script. I am using select and case statements for this task. One of the options in case is this loop and it shall somehow get a range of numbers from user input. This now becomes a problem.
Problem:
If I use read to get the range then it fails when using {1..7} as input. The input is taken literally and the output is just:
{1..7}
I really would like to know why this happens. Let me use a more descriptive example with a simple echo command.
var={1..7} # fails and just outputs {1..7}
for p in $var; do echo $p;done
read var # Same result as above. Just outputs {1..7}
for p in $var; do echo $p;done
for p in {1..7}; do echo $p;done # works fine and outputs the numbers 1-7 separated by newlines.
I've found a workaround by storing the numbers in an array. The user can then input folder names separated by a space character like this: 1 2 3 4 5 6 7
read -a var # In this case the output is similar to the 3rd loop above
for p in ${var[@]}; do echo $p; done
This could be a way to go, but when backing up 40 folders ranging from 1-40, adding all the numbers one by one makes my script completely impractical. One could solve one of the millennium problems in the same amount of time.
Is there any way to read a range of numbers like {1..9} or could there be another way to get input from terminal into the script so I can iterate through the range within a for-loop?
This sounds like a question for Google, but I am obviously using the wrong search terms to get a useful answer. Most similar-looking issues on SO refer to brace and parameter expansion problems, which is not exactly the problem I have, though the answer feels like it goes in a similar direction. I fail to understand why a for loop iterating over {1..7} works, but assigning the same range with var={1..7} doesn't. Please help.
EDIT: My bash version:
$ echo $BASH_VERSION
4.2.25(1)-release
EDIT2: The versatility of a brace expansion is very important to me. A possible solution should include the ability to define as many ranges as possible. Like I would like to be able to choose between backing up just 1 folder or a fixed range between f.ex 4-22 and even multiple options like folders 1,2,5,6-7
Brace expansion is not performed on the right-hand side of a variable assignment, nor on the result of parameter expansion. Use a C-style for loop instead, with the user inputting the upper end of the range if necessary:
read upper
for ((i=1; i<=$upper; i++)); do
To input both a lower and an upper bound separated by whitespace:
read lower upper
for ((i=$lower; i<=$upper; i++)); do
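A complete sketch of the two-bound version; the here-string stands in for the interactive read, and the out variable stands in for the real per-folder command (e.g. rsync):

```shell
#!/bin/bash
# In the real script this would be an interactive: read lower upper
read -r lower upper <<< "4 7"

out=""
for ((i = lower; i <= upper; i++)); do
    out+="$i "       # here you would run: rsync -a "$i"/* "backup.$i"
done
echo "$out"
```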
For an arbitrary set of values, just push the burden to the user to generate the appropriate list; don't try to implement your own parser to process something like 1,2,20-22:
while read p; do
rsync -a $p/* backup.$p
done
The input is one value per line, such as
1
2
20
21
22
Even if the user is using the shell, they can call your script with something like
printf '%s\n' 1 2 {20..22} | backup.sh
It's easier for the user to generate the list than it is for you to safely parse a string describing the list.
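A runnable sketch of this pattern, with printf standing in for the user's input and echo standing in for the rsync call:

```shell
#!/bin/bash
# The user generates the list; the script just consumes one value per line
printf '%s\n' 1 2 {20..22} |
while read -r p; do
    echo "would run: rsync -a $p/* backup.$p"
done
```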
The evil eval
$ var={1..7}
$ for i in $(eval echo $var); do echo $i; done
This also works:
$ var="1 2 {5..9}"
$ for i in $(eval echo $var); do echo $i; done
1
2
5
6
7
8
9
The "evil eval" was a joke; eval is fine as long as you know what you're evaluating.
Or, with awk
$ echo "1 2 5-9 22-25" |
awk -v RS=' ' '/-/{split($0,a,"-"); while(a[1]<=a[2]) print a[1]++; next}1'
1
2
5
6
7
8
9
22
23
24
25
I have a script which takes parameters. In the shell I run the script like this: ./script 1 2 3 4. I prefer to use a file which contains 1 2 3 4 on a single line and run: ./script `cat file`.
Afterwards, I call this script in a for loop like this: for i in `./script `cat file`` but it doesn't work. What is the correct syntax?
You cannot nest backticks (command substitution) like this. In bash you can do this instead:
for i in $(./script $(<file))
$(<file) is another way of getting the output of $(cat file)
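A quick demonstration of the two forms producing the same result (the temp file and its contents are illustrative):

```shell
#!/bin/bash
f=$(mktemp)
printf '1 2 3 4' > "$f"

a=$(cat "$f")
b=$(<"$f")       # same result as $(cat "$f"), without spawning cat
echo "$a / $b"

rm -f "$f"
```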
Use $() instead of backticks (`...`) for command substitution. This, among other things, is one of the many reasons it is better.
That being said, using cat file to get a list of words is, at best, a poor idea and, at worst, a broken (and potentially dangerous) one. It will not work with any words that contain spaces or shell globbing characters.
I suggest not doing it in the first place.
man seq
generate a sequence of numbers.
#!/bin/bash
for i in $(seq 1 10)
do
echo $i
done
Is there any way of counting and recording the number of arguments passing through a pipe? I am piping values from a file of unknown length. I can dump the number to STDOUT using tee, but cannot get it into a variable:
seq 10 | tee >(wc -l) | xargs echo
I'm interested in whether this is possible for aesthetics and my own understanding, rather than some roundabout alternative such as rescanning the (non-txt) file of unknown length twice, or writing to an actual file and then reading it back in, etc.
Thanks!
A component in a pipeline (such as the tee in the OP) is executed in a subshell. Consequently, it cannot modify the parent shell's variables. That's a general rule about subshells and shell variables. Every shell has its own variables; when a (sub)shell is started, (some of) the parent shell's variables are copied into the child shell, but there is no link between variables in the parent and variables in the child.
You can communicate between subshells using pipes, but in a pipeline you are already doing that so it will be difficult to coordinate another layer, although it's not impossible. The aesthetics are not great, though.
So basically your best approach is exactly the one you explicitly discard, "writing to an actual file then reading back in". That's simple and reliable, and it's quite fast, particularly if the file is on an in-memory temporary filesystem (see tmpfs).
By the way, xargs normally splits input lines at whitespace to generate arguments, so in general the number of arguments it receives is not the same as the number of lines it reads.
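The subshell rule is easy to demonstrate; in bash (without the lastpipe option), a counter incremented inside a pipeline component never reaches the parent shell:

```shell
#!/bin/bash
count=0
# The while loop runs in a subshell because it is a pipeline component,
# so its assignments to count never reach the parent shell
seq 10 | while read -r line; do
    count=$((count + 1))
done
echo "$count"    # still 0
```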
Is this what you are looking for?
sh$ seq 10 | tee >(wc -l > /tmp/f) | xargs echo; V=$(cat </tmp/f)
1 2 3 4 5 6 7 8 9 10
sh$ echo $V
10
If you don't like the use of the temporary file (I should have used mktemp...), you might use a fifo (mkfifo) instead.
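A sketch of that fifo variant; the fifo path comes from mktemp -u, which is inherently racy but acceptable for a demo:

```shell
#!/bin/bash
fifo=$(mktemp -u)        # an unused path name (racy, but fine for a sketch)
mkfifo "$fifo"

seq 10 | tee >(wc -l > "$fifo") | xargs echo
V=$(cat "$fifo")         # opening the fifo for reading unblocks the wc writer

rm -f "$fifo"
echo "$V"
```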
If it is acceptable to use stderr as output, you might rewrite your example as:
sh$ V=$(seq 10 | tee >(xargs echo 1>&2) | wc -l)
1 2 3 4 5 6 7 8 9 10
sh$ echo $V
10
I'm struggling to understand command redirection/reuse...
I understand there's the <(...) <(...) methodology and the $( ... && ... ) technique for combining output. But I don't really fully understand the difference (I realize that (...) drops you into a new shell, so you can potentially change directories and lose any vars you haven't exported, but I'm unsure how it affects the general redirection scheme), and I'm still quite confused about how to do one-to-many redirection after skimming the examples and instructions in:
Advanced Bash Scripting Guide 1
Advanced Bash Scripting Guide 2
My own attempts to play around with it have mostly resulted in "ambiguous redirect" errors.
For example, let's say I want to do a one liner given by the pseudocode below
CMD 1 && CMD 2 && CMD 3 --> (1)
CMD 4 (1) --> (2)
CMD 5 (1) --> CMD 6 --> (3)
CMD 7 (2) && CMD 8 (3) --> (4)
CMD 9 (2) --> (5)
CMD 10 (2) (3) -->(6)
VAR= echo (4) && echo (5) && echo (6)
Or as a process diagram
CMD 1 +CMD 2 && CMD 3
|\
| ---> CMD 5 ------> CMD 6-----\
V / V
CMD 4 ----------------u--------> CMD 10
| \ V /
| -------->CMD 7 + CMD 8 /
V | /
CMD 9 | /
\ | /
\ V /
--------> VAR <----------
Where outputs are designated as -->; storage for reuse in another op is given by -->(#); and combination operations are given by &&.
I currently have no idea how to do this in a single line without redundant code.
I want to truly master command redirection so I can make some powerful one-liners.
Hopefully that's clear enough... I could come up with proof of concept examples for the commands, if you need them, but the pseudocode should give you the idea.
In reply to your comment:
sort <(cd $CURR_DIR && find . -type f -ctime $FTIME) \
<(cd $CURR_DIR && find . -type f -atime $FTIME) \
<(cd $CURR_DIR && find . -type f -mtime $FTIME) | uniq
can be written as (which I believe is clearer)
(find . -type f -ctime $FTIME && find . -type f -atime $FTIME \
&& find . -type f -mtime $FTIME) | sort | uniq
Given three programs, each of which produces "a" or "z" as output, produce a string that contains the sorted output and also the unique output, as a one-liner:
mkfifo named_pipe{1,2,3}; (echo z ; echo a ; echo z) > named_pipe1 & \
tee < named_pipe1 >(sort > named_pipe2) | sort | uniq > named_pipe3 & \
output=$(echo sorted `cat named_pipe2`; echo unique `cat named_pipe3`); \
rm named_pipe{1,2,3}
Produces sorted a z z unique a z
One thing you may notice about this solution is that it's been split up so that each grouping of commands has its own line. I suppose what I'm getting at here is that one-liners can be cool, but often clarity is better.
The mechanism by which this works is the use of a program called tee and named pipes. A named pipe is exactly like an anonymous pipe, e.g. cat words.txt | gzip, except that it can be referenced from the file system (though no actual data is written to the file system). Note that writing to a named pipe will block until another process reads from it. I've been gratuitous with my use of pipes here, just so you can get a feel for how to use them.
Tee, as noted by others, can replicate input to several outputs.
There's the beginning of an answer to your problem in this post:
How can I send the stdout of one process to multiple processes using (preferably unnamed) pipes in Unix (or Windows)?
Using tee and >(command). But your example is rather sophisticated, and it's hard to imagine a one-line solution, without temp storage (var or file), by only combining commands (the Unix way, yes!).
I mean, if you agree to have several commands launched sequentially, it may be easier (but it won't be a one-liner any longer)...
Even writing this kind of expression with a more complex language (say python) would be complicated.
You have several different problems to solve.
You have one output that you need as input for several other commands. You can solve this with tee >(cmd1) | cmd2. See this answer: How can I send the stdout of one process to multiple processes using (preferably unnamed) pipes in Unix (or Windows)?
An alternative is to create a file. I usually prefer the file approach because it allows you to debug the final script. To avoid leaving a mess, create a temporary work directory which you delete with trap "..." EXIT.
You need to combine the outputs of several commands as the input to a single command. Use sub shells here: ( CMD 1 ; CMD 2 ) | CMD 3 combines the output of 1 and 2 as input for 3.
You need a mix of the two above. You can create additional file descriptors but each can be read only once. Use cat to combine them and tee to create copies.
This should all be possible but there is a drawback: If something doesn't work or goes wrong, you will never be able to find the bug.
Therefore: create a work directory and use files. If you put it in /tmp, the data won't be written to disk (on modern Linux systems, /tmp is often a RAM disk), so this behaves like pipes except with huge amounts of data; plus you will be able to maintain the script.
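A sketch combining these pieces with the temp-directory-plus-trap approach (all file names are illustrative, and echo/seq stand in for the real commands):

```shell
#!/bin/bash
work=$(mktemp -d)
trap 'rm -rf "$work"' EXIT      # clean up the work directory on exit

# Fan-out: one output, two consumers (tee writes one copy, stdout the other)
seq 5 | tee "$work/copy1" > "$work/copy2"

# Fan-in: combine the outputs of two commands as the input of one command
( sort -rn "$work/copy1"; wc -l < "$work/copy2" ) > "$work/combined"

cat "$work/combined"
```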