I have a series of images with the file names
motorist_intensity_2.jpg
...
...
motorist_intensity_256.jpg
and I want to make an animated GIF from them using ImageMagick. Now, the command
convert -delay 100 -loop 0 motorist_intensity_* motorist.gif
works, but the frames are out of sorted order. I can produce a sorted file list using
ls motorist_intensity_* | sort -n -t _ -k 3
but how can I pass that list to the convert command in place of the original glob motorist_intensity_*?
You can use brace expansion to expand in the order you want:
convert -delay 100 -loop 0 motorist_intensity_{2..256}.jpg motorist.gif
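Note that brace expansion is purely textual: the shell generates every name in the 2..256 sequence whether or not the corresponding file exists, so this only works if no frame is missing. A quick echo shows the expansion:
echo motorist_intensity_{2..4}.jpg
motorist_intensity_2.jpg motorist_intensity_3.jpg motorist_intensity_4.jpg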
The following should work:
convert -delay 100 -loop 0 `ls motorist_intensity_* | sort -n -t _ -k 3` motorist.gif
By putting the ls ... command in backquotes, it gets executed and its output is inserted into the command line.
Update
It is preferable to use the $() technique for evaluating an expression (see comments below); also, I am not 100% sure that the output of sort won't include newlines that mess you up. Solving both problems:
convert -delay 100 -loop 0 $(ls motorist_intensity_* | sort -n -t _ -k 3 | xargs echo ) motorist.gif
The xargs echo is a nice shorthand for "take each line of the input in turn and echo it to the output without the newline". It is my preferred way of converting multiple lines to a single line (although there are many others).
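As a throwaway illustration of that shorthand, separate from the gif command:
printf '%s\n' one two three | xargs echo
one two three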
PS - This does not solve the problem you would get if the filename contained a newline. I am assuming they don't...
I'm trying to create a Bash script to calculate the MD5 checksums of big files using different processes. I learned that one should use & for that purpose.
At the same time, I wanted to capture the checksum results in different variables and write them to a file in order to read them afterwards.
So I wrote the following script, "test_base.sh", and executed it using the command "sh ./test_base.sh"; the results were sent to the file "test.txt", whose values were empty.
My OS is Lubuntu 22.04 LTS.
Why are the values in "test.txt" empty?
Code of the "test_base.sh":
#!/bin/bash
md51=`md5sum -b ./source/test1.mp4|cut -b 1-32` &
md52=`md5sum -b ./source/test2.mp4|cut -b 1-32` &
wait
echo "md51=$md51">./test.txt
echo "md52=$md52">>./test.txt
Result of "test.txt":
md51=
md52=
Updated Answer
If you really, really want to avoid intermediate files, you can use GNU Parallel as suggested by @Socowi in the comments. So, if you run this:
parallel -k md5sum {} ::: test1.mp4 test2.mp4
you will get something like this, where -k keeps the output in order regardless of which one finishes first:
d5494cafb551b56424d83889086bd128 test1.mp4
3955a4ddb985de2c99f3d7f7bc5235f8 test2.mp4
Now, if you translate the linefeeds into spaces, like this:
parallel -k md5sum {} ::: test1.mp4 test2.mp4 | tr '\n' ' '
You will get:
d5494cafb551b56424d83889086bd128 test1.mp4 3955a4ddb985de2c99f3d7f7bc5235f8 test2.mp4
You can then read this into bash variables, using _ for the interspersed parts you aren't interested in:
read md51 _ md52 _ < <(parallel -k md5sum {} ::: test1.mp4 test2.mp4 | tr '\n' ' ')
echo $md51, $md52
d5494cafb551b56424d83889086bd128, 3955a4ddb985de2c99f3d7f7bc5235f8
Yes, this will fail if there are spaces or linefeeds in your filenames. If required, you can build a successively more complicated command to deal with cases your question doesn't mention, but then you rather miss the salient point of what I am suggesting.
Original Answer
bash doesn't really have the concept of awaiting the result of a promise, and an assignment that is backgrounded with & runs in a subshell, so the value never reaches the parent shell; that is why your values come out empty. So you could go with something like:
md5sum test1.mp4 > md51.txt &
md5sum test2.mp4 > md52.txt &
wait # for both
md51=$(awk '{print $1}' md51.txt)
md52=$(awk '{print $1}' md52.txt)
rm md5?.txt
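If you would rather not reach for awk, plain read can split off the first field just as well; a minimal sketch using the same temporary files:
read -r md51 _ < md51.txt
read -r md52 _ < md52.txt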
Right now my Script looks like that:
ffmpeg -re -stream_loop -1 -i videos/fitness"$(( RANDOM % 8 ))".mp4
It searches for all videos in my folder that start with "fitness".
fitness1.mp4
fitness2.mp4
fitness3.mp4
and so on...
and it picks one randomly between 1 and 8 (I'm using /fitness"$(( RANDOM % 8 ))".mp4).
Is there a way to just use a random mp4 file from the folder, no matter what its name is?
Shuf
Use the shuf command:
shuf -en1 dir/*.mp4
If you don't have shuf (for instance on BSD), you can write your own shuf -en1 very easily:
shufen1() {
  shift "$((RANDOM % $#))" # slightly biased towards small numbers, at most 32767
  printf %s\\n "$1"
}
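Usage mirrors shuf -en1, for example with the question's folder (assuming the function is defined in your current shell and the glob matches at least one file):
ffmpeg -re -stream_loop -1 -i "$(shufen1 videos/*.mp4)"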
Pure bash solution using arrays
For completeness, here is a pure bash solution. However, this has the same problems as the self-written shufen1 function.
a=(dir/*.mp4)
printf %s\\n "${a[RANDOM % ${#a[@]}]}"
Using these solutions
Both commands work under the assumption that there is at least one file whose .mp4 extension is written in lowercase. You can enable case-insensitive matching with shopt -s nocaseglob.
You might want to set shopt -s failglob to get an error in case there is no such file; otherwise the literal string dir/*.mp4 will be printed. A small sketch combining both options follows.
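For example, set both options in the same shell that performs the expansion:
shopt -s nocaseglob failglob
a=(dir/*.mp4)   # now also matches .MP4 etc., and errors out if nothing matches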
To use any of these solutions, write them into a subshell:
ffmpeg -re -stream_loop -1 -i "$(shuf -en1 videos/*.mp4)"
ffmpeg -re -stream_loop -1 -i "$(a=(videos/*.mp4); printf %s\\n "${a[RANDOM % ${#a[#]}]}")"
find videos -type f -name '*.mp4' | shuf -n1
Find files with the name *.mp4, randomly permute the list of names and output a single filename.
ffmpeg -re -stream_loop -1 -i "$(find videos -type f -name '*.mp4' | shuf -n1)"
I have a (very) large CSV file, around 70 GB, which I am trying to sort using the sort command. As much as I try, the output is not being written to the file. Here is what I tried:
sort -T /data/data/.tmp -t "," -k 38 /data/data/raw/KKR.csv > /data/data/raw/KKR_38.csv
sort -T /data/data/.tmp -t "," -k 38 /data/data/raw/KKR.csv -o /data/data/raw/KKR-38.csv
What happens is that the KKR_38.csv file is created and its size is the same as that of KKR.csv, but there is nothing inside it. When I do
head -n 100 /data/data/raw/KKR_38.csv
It prints out 100 empty lines.
When you sort, it is quite normal for the empty lines to come first. Try this:
tail -100 /data/data/raw/KKR_38.csv
You can use the following commands if you want to ignore the empty lines:
cat -s /data/data/raw/KKR_38.csv | less  # squeeze successive empty lines into one
or if you want to remove them:
sed '/^$/d' /data/data/raw/KKR_38.csv | less
You can redirect the output of those commands to create another file without the empty lines (watch out for the disk space on your file system).
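For example (the output name KKR_38_noblank.csv is just illustrative):
sed '/^$/d' /data/data/raw/KKR_38.csv > /data/data/raw/KKR_38_noblank.csv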
I'm making a script to auto-generate a url with a random number at a specific location. This will be for calling a JSON API for a random endpoint. The end goal is to generate something like this:
curl -s http://api.openbeerdatabase.com/v1/beers/<RAND_INT>.json | jq '.'
where <RAND_INT> is a randomly-generated number. I can create this random number with the following command:
$ od -An -N2 -i /dev/random
          126
I do not know why the 10 extra spaces are in the output. When I chain the above commands together to generate the URL, I get this:
$ echo http://api.openbeerdatabase.com/v1/beers/`od -An -N2 -i /dev/random`.json
http://api.openbeerdatabase.com/v1/beers/ 43250.json
As you see, there is a single extra space in the generated URL. How do I avoid this?
I've also tried subshelling the rand_int command $(od -An -N2 -i /dev/random) but that produces the same thing. I've thought about piping the commands together, but I don't know how to capture the output of the rand_int command in a variable to be used in the URL.
As the comments show, there's more than one way to do this. Here's what I would do; an arithmetic context ignores the surrounding whitespace:
(( n = $( od -An -N2 -i /dev/urandom ) ))
echo http://api.openbeerdatabase.com/v1/beers/${n}.json
Or, to put it in one line:
echo http://api.openbeerdatabase.com/v1/beers/$(( $( od -An -N2 -i /dev/urandom ) )).json
Or, just use ${RANDOM} instead, since bash provides it, although its values top out at 32767, which might be one reason you preferred your od-based method.
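A sketch of that ${RANDOM} variant, using the same URL:
echo http://api.openbeerdatabase.com/v1/beers/${RANDOM}.json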
I'm trying to split a very large file to one new file per line.
Why? It's going to be input for Mahout, but there are too many lines and not enough suffixes for split.
Is there a way to do this in bash?
Increase Your Suffix Length with Split
If you insist on using split, then you have to increase your suffix length. For example, assuming you have 10,000 lines in your file:
split --suffix-length=5 --lines=1 foo.txt
If you really want to go nuts with this approach, you can even set the suffix length dynamically with the wc command and some shell arithmetic. For example:
file='foo.txt'
# Suffix length = number of digits in the file's line count;
# wc --chars counts those digits plus the trailing newline, hence the -1.
split \
  --suffix-length=$(( $(wc --chars < <(wc --lines < "$file")) - 1 )) \
  --lines=1 \
  "$file"
Use Xargs Instead
However, the above is really just a kludge anyway. A more correct solution would be to use xargs from the GNU findutils package to invoke some command once per line. For example:
xargs --max-lines=1 --arg-file=foo.txt your_command
This will pass one line at a time to your command. This is a much more flexible approach and will dramatically reduce your disk I/O.
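As a quick sanity check, you can substitute echo for your real command and add -t, which makes xargs print each constructed command line to stderr before running it, so the one-invocation-per-line behaviour is visible:
xargs --max-lines=1 --arg-file=foo.txt -t echo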
split --lines=1 --suffix-length=5 input.txt output.
This will use 5 characters per suffix, which is enough for 26^5 = 11881376 files. If you really have more than that, increase suffix-length.
Here's another way to do something for each line:
while IFS= read -r line; do
  do_something_with "$line"
done < big.file
GNU Parallel can do this:
cat big.file | parallel --pipe -N1 'cat > {#}'
But if Mahout can read from stdin then you can avoid the temporary files:
cat big.file | parallel --pipe -N1 mahout --input-file -
Learn more about GNU Parallel at https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1 and walk through the tutorial at http://www.gnu.org/software/parallel/parallel_tutorial.html