I have downloaded the OpenFace tool for facial landmark extraction. Following the wiki, I am able to extract features from a single video file by running the following in a Windows command-line interface:
./FeatureExtraction.exe -f "...\CREMA-D\VideoFlash\1001_DFA_ANG_XX.flv" -out_dir ".../openface-output" where -f represents the path of my input file.
However, I would like to extract ALL the .flv videos in the ...\CREMA-D\VideoFlash directory at once. Is there an easy way to do this?
Edit:
It is possible to add multiple -f flags, e.g. ./FeatureExtraction.exe -f "...\CREMA-D\VideoFlash\1001_DFA_ANG_XX.flv" -f "...\CREMA-D\VideoFlash\1002_DFA_ANG_XX.flv" -f "...\CREMA-D\VideoFlash\1003_DFA_ANG_XX.flv" -out_dir ".../openface-output". Is there a way to loop through all the files in the directory and add this flag for each one using a FOR ... DO loop?
Or, alternatively, run ./FeatureExtraction.exe -f "...\CREMA-D\VideoFlash\[file].flv" -out_dir ".../openface-output" once for each file in a particular directory.
My current hackish solution is a FOR loop that iterates through each file in the input location: FOR %i IN ("...\CREMA-D\VideoFlash\*") DO .\FeatureExtraction.exe -f %i -out_dir "...\crema-d\openface-output"
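Spelled out as a small .bat file, the same idea might look roughly like this (the paths are placeholders for my real ones, and the loop is restricted to .flv files):
@echo off
REM Placeholder paths; assumes the VideoFlash folder path itself contains no spaces.
REM Run from the OpenFace folder so FeatureExtraction.exe is found.
REM Inside a .bat file the loop variable needs a doubled percent sign (%%i).
FOR %%i IN (C:\path\to\CREMA-D\VideoFlash\*.flv) DO (
    FeatureExtraction.exe -f "%%i" -out_dir "C:\path\to\openface-output"
)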
Related
Goal is to convert all .wav files to .mp3 in a different location.
The following code works, but creates output files in the same directory.
All the newly created .mp3's are right alongside the .wav's.
for file in /path/to/*.wav; do lame --preset insane "$file" "${file%.wav}".mp3; done
How can I use the terminal to convert a drive full of .wav's with lame and output the .mp3's to a different drive? I've tried changing lame's output, but this syntax grabs the entire filename. Looking for the simplest solution.
From the lame manual, the synopsis is very straightforward:
lame [options] <infile> <outfile>
Found the basic concept here
Assuming that the output files should be placed in /output, it is possible to extend the loop to compute the output file name, stripping the directory portion the way basename would:
OUT=/output
for file in /path/to/*.wav; do
  # Replace .wav with .mp3
  out=${file%.wav}.mp3
  # Remove the directory part (everything up to the last '/')
  out=${out##*/}
  lame --preset insane "$file" "$OUT/$out"
done
I'm currently using tags such as exiftool "-FileModifyDate<datetimeoriginal", etc. in Terminal/cmd...
I'm switching from iCloud, and the dates in the metadata are EXIF only (meaning Finder and Windows Explorer just see the date the files were downloaded)..
It's working, but for any slo-mo videos that are M4V they don't change. I have the originals, which do have the right dates, and I was wondering if there is a way to match file names (123.mp4 = 123.m4v) and copy the metadata over... But I also want to do it in batches (since I will be offloading my iPhone every month or so). Thanks!
It will depend upon your directory structure, but your command should be something like this:
exiftool -TagsFromFile %d%f.mp4 "-FileModifyDate<datetimeoriginal" -ext m4v DIR
This assumes the m4v files are in the same directory as the mp4 files. If not, change the %d to the directory path to the mp4 files.
Breakdown:
-TagsFromFile: Instructs exiftool that it will be copying tags from one file to another.
%d%f.mp4: This is the source file for the copy. %d is an exiftool variable for the directory of the current m4v file being processed. %f is the filename of the current m4v file being processed, not including the extension. The thing to remember is that you are processing m4v files that are in DIR, and this argument tells exiftool how to find the source mp4 file for the tag copy. A common mistake is to think that exiftool is finding the source files (mp4 in this case) to copy to the target files (m4v), when exiftool is doing the reverse.
"-FileModifyDate<datetimeoriginal": The tag copy operation you want to do. Copies the DateTimeOriginal tag in the file to the system FileModifyDate.
-ext m4v: Process only m4v files.
Replace DIR with the filenames/directory paths you want to process. Add -r to recurse into sub-directories. If this command is run under Unix/Mac, swap the double quotes for single quotes to avoid shell interpretation.
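For the monthly batch case, one possible invocation (the offload folder name below is only an example) is to point the same command at that month's folder and add -r:
exiftool -r -TagsFromFile %d%f.mp4 "-FileModifyDate<datetimeoriginal" -ext m4v "C:\iPhone-offload\2024-01"
The quoting note above still applies if you run this on a Mac.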
I have a script that converts ~8000 files to mp3. Some of the files have similar names, where just the extension is different, and with my script as it stands they would all produce the same .mp3 file.
So I just want ffmpeg to add something like (x) at the end of the name, before the extension, and not ask me every few files.
Thank you!
The flag to overwrite without asking y/n is -y; just include -y in your command.
(-y overwrites files of the same name without asking.)
Look at the ffmpeg docs: https://ffmpeg.org/ffmpeg.html.
To find this flag, just search for the "overwrite" keyword in the docs.
Example to extract an image from a video and save it without asking y/n:
ffmpeg -i <video_file_here> -y -ss 00:00:00 -vframes 1 image.jpg
Now, to save each file with a new number in the name, you can make a .bat file with a loop counter variable, increment the counter, and issue the command with the new number each time.
For a reference on how to do this, you can check one of my questions here: incrementing a counter variable inside a FOR loop
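A rough sketch of that .bat idea (the "in" and "out" folders are placeholders, "out" must already exist, and delayed expansion is needed so !count! is re-read on every pass of the loop):
@echo off
setlocal enabledelayedexpansion
set /a count=0
for %%f in (in\*.*) do (
    set /a count+=1
    ffmpeg -y -i "%%f" "out\%%~nf_!count!.mp3"
)
Because every output gets its own number, inputs that differ only by extension can no longer collide.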
FFmpeg does not have the functionality you describe.
You need to implement it in your script instead.
Or (on the FFmpeg command line) explicitly specify such output file name that is guaranteed not to exist.
For example, if all your input files are in the same directory, then for an input file named fname.ext use an output file named fname.ext.mp3 and place it in a new directory.
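A minimal shell sketch of that naming scheme (directory names are placeholders, and this assumes a bash-style script): keeping the full original name, extension included, means inputs that differ only by extension can never map to the same output file.
mkdir -p /path/to/mp3-out
for f in /path/to/input/*; do
    ffmpeg -i "$f" "/path/to/mp3-out/$(basename "$f").mp3"
done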
I have a folder of data folders with the following structure:
sampleName1-randomNumbers/subfolder1/subfolder2/subfolder3/data1.gz
sampleName1-randomNumbers/subfolder1/subfolder2/subfolder3/data2.gz
sampleName2-randomNumbers/subfolder1/subfolder2/subfolder3/data1.gz
I want to modify all the data.gz within each sample folder by appending the sample name but not the random numbers to get:
sampleName1-randomNumbers/subfolder1/subfolder2/subfolder3/sampleName1_data1.gz
sampleName1-randomNumbers/subfolder1/subfolder2/subfolder3/sampleName1_data2.gz
sampleName2-randomNumbers/subfolder1/subfolder2/subfolder3/sampleName2_data1.gz
It seems like this should be a simple mv for loop but I haven't been able to figure out how to pull part of a folder name using basename.
for i in */Data/Intensities/BaseCalls/*.gz; do mv $i "fastq""/"${i%%-*}"."`basename $i`; done
I couldn't figure out how to make the files stay in their original folder but for my purposes it works to have all the files go to a new folder ("fastq")
I suppose the "sampleName" part doesn't include dashes. In that case, use the standard pattern removal expansion: %%. That is, suppose your full path (relative to directory root) is stored in $path, just do ${path%%-*} to extract the "sampleName" part. Search for %% in the Bash Reference Manual for more details. As a simple example:
> path=sampleName1-randomNumbers/subfolder1/subfolder2/subfolder3/data1.gz
> echo ${path%%-*}
sampleName1
Otherwise, you could also use more advanced substring extraction based on regex. See BashFAQ/100 or Manipulating Strings from the TLDP Advanced Bash Scripting Guide.
Update. Here's the full command to perform the job described, and it is entirely native to the shell:
for file in */Data/Intensities/BaseCalls/*.gz; do
mv "$file" "${file%/*}/${file%%-*}_${file##*/}"
done
My website's file structure has gotten very messy over the years from uploading random files to test different things out. I have a list of all my files, such as this:
file1.html
another.html
otherstuff.php
cool.jpg
whatsthisdo.js
hmmmm.js
Is there any way I can input my list of files via command line and search the contents of all the other files on my website and output a list of the files that aren't mentioned anywhere on my other files?
For example, if cool.jpg and hmmmm.js weren't mentioned in any of my other files then it could output them in a list like this:
cool.jpg
hmmmm.js
And then any of those other files mentioned above aren't listed, because they are mentioned somewhere in another file. Note: I don't want it to automatically delete the unused files; I'll do that manually.
Also, of course I have multiple folders so it will need to search recursively from my current location and output all the unused (unreferenced) files.
I'm thinking the command line would be the fastest/easiest way, unless someone knows of another. Thanks in advance for any help you can give!
Yep! This is pretty easy to do with grep. In this case, you would run a command like:
$ for orphan in `cat orphans.txt`; do \
echo "Checking for presence of ${orphan} in present directory..." ;
grep -rl "${orphan}" . ; done
And orphans.txt would look like your list of files above, one file per line. You can add -i to the grep above if you want to grep case-insensitively. And you would want to run that command in /var/www or wherever your distribution keeps its webroots. If you see the "Checking for..." line for a given name and no matches listed below it, then nothing in your other files references that file.
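If you want the output to be just the unreferenced names rather than the matches, a small variation on the same idea (still reading orphans.txt from the webroot) is to rely on grep's exit status: -q suppresses the output, -F treats the name as a fixed string so the dot in .jpg isn't a regex wildcard, and a name is printed only when nothing mentions it:
while IFS= read -r name; do
    grep -rqF -- "$name" . || echo "$name"
done < orphans.txt
(One caveat: a text file that happens to mention its own name will count as referenced.)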