I would like to efficiently convert a couple of JPEG images contained in a tar.gz to an x264 MP4 movie.
gzip -cd Monitor-1-xx.tar.gz | cpio -i --to-stdout | jpegtopnm | ppmtoy4m -F 4:1 | \
    x264 --crf 24 -o Monitor-1-xx.mp4 --stdin y4m -
The problem here is that, after cpio, I have multiple JPEG files in a single stream, and jpegtopnm only converts the first one.
I would like to find a way to split the stream (or to get it pre-split), and then run jpegtopnm once per split. It is somewhat like what xargs does when I untar to disk first, but writing to disk is something I am trying to eschew:
mkdir tmpMonitor && cd tmpMonitor && tar -xf ../Monitor-1-xx.tar.gz
find . -iname "*.jpg"|xargs -n1 jpegtopnm|ppmtoy4m -F 4:1| \
x264 --crf 24 -o ../xx.mp4 --stdin y4m -
cd .. && rm -rf tmpMonitor
Any suggestions?
tar has a couple of options that may be useful here (I have GNU tar, so apologies in advance for assuming you do in case you actually don't):
--wildcards - lets you pick files to extract from the tar using globs like *.jpeg
--to-command - pipe each extracted file to the given command.
So maybe something like this?
tar -xzf Monitor-1-xx.tar.gz --wildcards '*.jpeg' \
--to-command="jpegtopnm|ppmtoy4m -F 4:1| x264 --crf 24 -o ../xx.mp4 --stdin y4m -"
Well, I don't know much about x264, so do consider that untested code; I tested it using simple .txt files instead of JPEGs and cat -n instead of jpegtopnm etc. The other thing is, I am guessing you want separate output files (one per JPEG), so a fixed ../xx.mp4 won't do: each invocation of jpegtopnm|ppmtoy4m -F 4:1| x264 --crf 24 ... --stdin y4m - would overwrite it. Assuming you want a different output filename for -o on each invocation, the following hack might work (note the single quotes, so that $(date ...) is expanded by the shell tar spawns for each file, not once up front by your interactive shell):
tar -xzf Monitor-1-xx.tar.gz --wildcards '*.jpeg' \
    --to-command='jpegtopnm|ppmtoy4m -F 4:1| x264 --crf 24 -o ../xx-$(date +%H%M%S%N).mp4 --stdin y4m -'
So I have 20 subfolders full of files in my main folder, with around 200 files in every subfolder. I've been trying to write a script to convert every picture in every subfolder to DNG.
I have done some research and was able to batch convert images in the current folder.
I've tried developing the idea to get it to work for subfolders, but without success.
Here is the code I've written:
for D in `find . -type d`; do for i in *.RW2; do sips -s format jpeg $i --out "${i%.*}.jpg"; cd ..; done; done;
The easiest and fastest way to do this is with GNU Parallel like this:
find . -iname \*rw2 -print0 | parallel -0 sips -s format jpeg --out {.}.jpg {}
because that will use all your CPU cores in parallel. But before you launch any command you haven't tested, it is best to use the --dry-run option like this, so that it shows you what it is going to do without actually doing anything:
find . -iname \*rw2 -print0 | parallel --dry-run -0 sips -s format jpeg --out {.}.jpg {}
Sample Output
sips -s format jpeg --out ./sub1/b.jpg ./sub1/b.rw2
sips -s format jpeg --out ./sub1/a.jpg ./sub1/a.RW2
sips -s format jpeg --out ./sub2/b.jpg ./sub2/b.rw2
If you like the way it looks, remove the --dry-run and run it again for real. Note that the -iname predicate is case-insensitive, so it will match both ".RW2" and ".rw2".
GNU Parallel is easily installed on macOS via homebrew with:
brew install parallel
It can also be installed without a package manager (like homebrew) because it is actually a Perl script and Macs come with Perl. So you can install by doing this in Terminal:
(wget pi.dk/3 -qO - || curl pi.dk/3/) | bash
Your question seems confused as to whether you want DNG files, as your title suggests, or JPEG files, as your code suggests. My code above generates JPEGs as it stands. If you want DNG output, you will need to install Adobe DNG Converter and then run:
find . -iname \*rw2 -print0 | parallel --dry-run -0 \"/Applications/Adobe DNG Converter.app/Contents/MacOS/Adobe DNG Converter\"
There are some other options you can append to the end of the above command:
-e will embed the original RW2 file in the DNG
-u will create the DNG file uncompressed
-fl will add fast load information to the DNG
DNG Converter seems happy enough to run multiple instances in parallel, but I did not test with thousands of files. If you run into issues, just run one job at a time by changing to parallel -j 1 ...
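For instance, combining the flags above with one-job-at-a-time running might look like this (an untested sketch; it assumes the converter's default install path shown above):
find . -iname \*rw2 -print0 | parallel -j 1 -0 \"/Applications/Adobe DNG Converter.app/Contents/MacOS/Adobe DNG Converter\" -e -fl {}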
Adobe DNG Converter is easily installed under macOS using homebrew as follows:
brew install caskroom/cask/adobe-dng-converter
I can run:
echo "asdf" > testfile
tar czf a.tar.gz testfile
tar czf b.tar.gz testfile
md5sum *.tar.gz
and it turns out that a.tar.gz and b.tar.gz have different md5 hashes. It's true that they're different, which diff -u a.tar.gz b.tar.gz confirms.
What additional flags do I need to pass in to tar so that its output is consistent over time with the same input?
tar czf outfile infiles is equivalent to
tar cf - infiles | gzip > outfile
The reason the files are different is because gzip puts its input filename and modification time into the compressed file. When the input is a pipe, it uses an empty string as the filename and the current time as the modification time.
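You can see that embedded metadata without decompressing, for example with file (whose exact wording varies by version):
file a.tar.gz    # reports something like "gzip compressed data, last modified: ..." when a timestamp is stored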
But gzip also has a --no-name option, which tells it not to put the name and timestamp into the file. So if you write the expanded command explicitly, instead of using tar's -z option, you can make use of this option.
tar cf - testfile | gzip --no-name > a.tar.gz
tar cf - testfile | gzip --no-name > b.tar.gz
I tested this on OS X 10.6.8 and it works.
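If you have GNU tar, you can also keep it to a single tar invocation by naming the compressor explicitly. Recent GNU tar versions split the --use-compress-program value into words, so flags can be passed along; older versions only accept a bare program name, so treat this as a version-dependent sketch:
tar --use-compress-program='gzip --no-name' -cf a.tar.gz testfile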
For macOS:
In man tar we can look at the --options section, where we will find the !timestamp option, which excludes the timestamp from our gzip archive. Usage:
tar --options '!timestamp' -cvzf archive.tgz filename
It will produce the same md5 sum for the same files with the same names.
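For example, to check (macOS ships md5 rather than md5sum):
tar --options '!timestamp' -czf a.tgz testfile
tar --options '!timestamp' -czf b.tgz testfile
md5 a.tgz b.tgz    # the two digests should now match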
I want to convert my older *.wma files into *.mp3. For that purpose I found a short script that does the conversion using mplayer + lame (found here: https://askubuntu.com/questions/508625/python-v2-7-requires-to-install-plugins-to-play-media-files-of-the-following-t).
This works fine in a single directory. Now I want to improve it so that it works with 'find': the idea is to find each *.wma file and call the script to convert that file to *.mp3.
Here is the script:
FILENAME=$1
FILEPATH="$(dirname $1)"
BASENAME="$(basename $1)"
mplayer -vo null -vc dummy -af resample=44100 -ao pcm:waveheader "$FILENAME"
lame -m j -h --vbr-new -b 320 audiodump.wav -o "`basename "$FILENAME" .wma`.mp3"
echo "Path: $FILEPATH" # just to see if its correct
echo "File: $BASENAME" # just to see if its correct
rm -f audiodump.wav
rm -f "$FILENAME"
At the moment I'm dealing with the issue that the script puts the converted *.mp3 in the directory the console is working in (e.g. /home/user/ instead of /home/user/files/, where the *.wma comes from).
What can I do to make the script put the new *.mp3 into the same directory as the *.wma?
If I try to use 'mv' within the script, I get into trouble with embedded spaces in the *.wma filenames.
Thanks for any hints. I thought about setting the IFS to tab or newline, but I wonder if there is a better way to deal with this.
Here's something that uses ffmpeg for the conversion (after using ffprobe to figure out what the bit_rate should be). It's based on what I found in https://askubuntu.com/questions/508278/how-to-use-ffmpeg-to-convert-wma-to-mp3-recursively-importing-from-txt-file, but I didn't have access to avprobe, so I had to hunt for an alternative.
First navigate to the directory with all your files and run the following from your shell:
find . -type f | grep wma$ > wma-files.txt
Once that's done, you can put this into a script and run it:
#!/usr/bin/env bash
readarray -t files < wma-files.txt
ffprobe=<your_path_here>/ffprobe
ffmpeg=<your_path_here>/ffmpeg
for file in "${files[#]}"; do
out=${file%.wma}.mp3
bit_rate=`$ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1:nokey=1 "$file"`
$ffmpeg -i "$file" -vn -ar 44100 -ac 2 -ab "$bit_rate" -f mp3 "$out"
done
This will save the mp3 files alongside the wma ones.
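If you'd rather skip the intermediate wma-files.txt, the same loop can read null-delimited names straight from find, which also copes with spaces (and even newlines) in filenames. A sketch under the same assumptions about the ffprobe/ffmpeg paths; note the -nostdin, which stops ffmpeg from swallowing the file list on stdin:
#!/usr/bin/env bash
find . -type f -name '*.wma' -print0 | while IFS= read -r -d '' file; do
    out=${file%.wma}.mp3
    bit_rate=$(ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1:nokey=1 "$file")
    ffmpeg -nostdin -i "$file" -vn -ar 44100 -ac 2 -ab "$bit_rate" -f mp3 "$out"
done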
The problem is that basename strips both the .wma extension and the path leading to the file, and you only want the .wma stripped.
So the answer is not to use basename, and instead to do the .wma stripping yourself (with parameter expansion):
outfile=${FILENAME%.wma}
lame -m j -h --vbr-new -b 320 audiodump.wav -o "$outfile.mp3"
(Note that I used lowercase $outfile. Generally $ALL_CAPS variables are reserved for the shell/terminal/environment and should be avoided in scripts.)
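Putting that together, the script might become something like this (untested; everything else is unchanged from the original):
#!/bin/bash
filename=$1
outfile=${filename%.wma}    # strips only the extension, keeping the directory part
mplayer -vo null -vc dummy -af resample=44100 -ao pcm:waveheader "$filename"
lame -m j -h --vbr-new -b 320 audiodump.wav -o "$outfile.mp3"
rm -f audiodump.wav
rm -f "$filename"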
How can I untar all tar files in one command using PuTTY?
I tried the following, but it's not untarring (all the files start with alcatelS):
tar -xfv alcatelS*.tar
It is not working: I get no errors, and nothing is extracted.
Thank you,
-xfv is wrong, since the v is then taken as the archive filename. Also, tar can't accept multiple archives to extract at once. Perhaps -M could be used, but it was a little stubborn when I tried it; besides, it would be awkward to build such arguments from pathname expansion, since you would have to write tar -xvM -f file1.tar -f file2.tar.
Do this instead:
for F in alcatelS*.tar; do
tar -xvf "$F"
done
Or, condensed onto one line (the same loop, not really a one-liner in any deeper sense):
for F in alcatelS*.tar; do tar -xvf "$F"; done
You can use the following command to extract all tar.gz files in a directory on Unix:
find . -name 'alcatelS*.tar.gz' -exec tar -xvf {} \;
Following is my favorite way to untar multiple tar files:
ls *tar.gz | xargs -n1 tar xvf
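Note that parsing ls breaks if an archive name contains whitespace; when that is a possibility, a null-delimited variant of the same idea is safer:
printf '%s\0' *.tar.gz | xargs -0 -n1 tar xvf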
Can be done in one line; the -i flag (--ignore-zeros) tells tar to keep reading past the end-of-archive marker of each concatenated tar:
cat *.tar | tar -xvf - -i
I have a really huge folder that I would like to gzip and split for archiving:
#!/bin/bash
dir=$1
name=$2
size=32000m
tar -czf /dev/stdout ${dir} | split -a 5 -d -b $size - ${name}
Is there a way to speed this up with GNU parallel?
Thanks.
It seems the best tool for parallel gzip compression is pigz. See the comparisons.
With it you can have a command like this:
tar -c "${dir}" | pigz -c | split -a 5 -d -b "${size}" - "${name}"
With its -p option you can also specify the number of threads to use (the default is the number of online processors, or 8 if that cannot be determined). See pigz --help or man pigz for more info.
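For example, to cap pigz at eight threads; and since its output is ordinary gzip, the pieces can later be reassembled and extracted with plain tar:
tar -c "${dir}" | pigz -c -p 8 | split -a 5 -d -b "${size}" - "${name}"
cat "${name}"* | tar -xzf -    # restore: concatenate the pieces and extract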
UPDATE
Using GNU parallel you could do something like this, which compresses each top-level entry of the folder into its own archive (a different trade-off from producing one big split stream):
contents=("$dir"/*)
outdir=/somewhere
parallel tar -cvpzf "${outdir}/{}.tar.gz" "$dir/{}" ::: "${contents[@]##*/}"