I found on another question how to convert individual files, which worked great:
"lame --mp3input -b birtratenumber input.mp3 output.mp3"
Thing is, I have around 60 files, and doing each one individually is very time consuming (the total amount of time does not bother me; it's more about having to sit there waiting to type the next command).
So, is there a way to run this command for all the files in the folder, using the source filename for each output but appending "_48" before the .mp3 extension, and saving the result in the same folder as the original files?
Thanks in advance for any and all help.
Use a shell for loop to process the files in turn:
for i in *.mp3; do
    lame --mp3input -b 48 "$i" "${i%%.mp3}_48.mp3"
done
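If you want to double-check the names that will be produced before running the conversion, a quick dry run (my own addition) just prints them:

for i in *.mp3; do
    echo "would write: ${i%%.mp3}_48.mp3"
done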
I'm using Mac Automator to run a shell script that executes a SoX command, and I'll ultimately save that workflow as a Service called "Combine". I'm virtually a complete beginner, but I do at least know to set Shell to /bin/bash and "Pass input" to "as arguments" within Automator.
I'm trying to be able to select multiple .wav files in the SAME folder using Finder (though not necessarily ALL the files in that folder, just the ones I have specifically selected), and have Automator instruct SoX to simply combine/concatenate those selected audio files sequentially into one output file called combined.wav. For example, if I selected two audio files that were 2 minutes each, I'd end up with a 4-minute "combined.wav" file.
I have the script below as a starting point to use within Automator, but the problem is that instead of actually combining the files I have selected, it just passes the last file through under the combined.wav name; nothing gets combined. I'm pretty sure the problem lies in the sox command itself; probably the "$f" is the mistake, if I had to guess.
#!/bin/bash
for f in "$@"; do
    combinedFolder="$(dirname "$f")/Combined"
    fileName=$(basename "$f")
    if [ ! -d "${combinedFolder}" ]; then
        mkdir "${combinedFolder}"
    fi
    /Applications/SoX/sox "$f" "${combinedFolder}/combined.wav"
done
Ideally I'd love to have two versions of this (corrected) script: one putting the combined.wav output file in a "Combined" subfolder, as the script above suggests, and an alternate (simpler) one that simply puts combined.wav in the same folder as the individual input files I've selected, with no subfolder.
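For what it's worth, SoX concatenates its inputs when it is given several input files before a single output file, so a minimal sketch of the simpler variant (assuming the selected files arrive as arguments, and reusing the /Applications/SoX/sox path from the script above) could look like this:

#!/bin/bash
# Concatenate every selected file into one combined.wav next to the first input.
outDir="$(dirname "$1")"
/Applications/SoX/sox "$@" "${outDir}/combined.wav"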
I'm trying to zip a massive directory of images that will be fed into a deep learning system. This is incredibly time consuming, so I would like to stop the zipping process prematurely with Ctrl + C and zip the directory in different "batches".
Currently I'm using zip -r9v folder.zip folder, and I've seen that the -u option allows updating changed files and adding new ones.
I'm worried about some file, or the zip itself, ending up corrupted if I terminate the process with Ctrl + C. From this answer I understand that cp can be terminated safely, and this other answer suggests that gzip is also safe.
Putting it all together: is it safe to end the zip command prematurely? Is the -u option viable for zipping in different batches?
Is it safe to end the zip command prematurely?
In my tests, cancelling zip (Info-ZIP, 16 June 2008 (v3.0)) with Ctrl+C did not create a zip archive at all, even when the already compressed data was 2.5 GB. Therefore, I would say Ctrl+C is "safe" (you won't end up with a corrupted file) but also pointless (you did all the work for nothing).
Is the -u option viable for zipping in different batches?
Yes. Zip archives compress each file individually, so the archives you get from adding files later on are as good as adding all files in a single run. Just remember that starting zip takes time too, so set the batch size as high as is acceptable to save time.
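As a rough illustration (the part1/part2/part3 subdirectories are hypothetical), building the archive in chunks with -u gives the same contents as one big run:

zip -r9v folder.zip folder/part1    # first batch creates the archive
zip -r9vu folder.zip folder/part2   # later batches add to / update it
zip -r9vu folder.zip folder/part3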
Here is a script that adds all your files to the zip archive but gives you a chance to stop the compression after every 100th file.
#! /bin/bash
batchsize=100
shopt -s globstar
files=(folder/**)
echo "Press enter to stop compression after this batch."
# Start at 0 unless a startfile value is passed in via the environment (for resuming).
for ((startfile=${startfile:-0}; startfile<"${#files[@]}"; startfile+=batchsize)); do
    # Only the first batch creates the archive; later batches use -u to update it.
    ((startfile==0)) && u= || u=u
    zip "-r9v$u" folder.zip "${files[@]:startfile:batchsize}"
    if read -t 0; then
        echo "Compression stopped before file $startfile."
        echo "Re-run this script with startfile=$startfile to continue."
        exit
    fi
done
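With the startfile default shown above, resuming later (assuming the script is saved as, say, batchzip.sh, a name of my own choosing, and 300 is whatever value the script reported) is just a matter of passing that value back in through the environment:

startfile=300 ./batchzip.sh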
For more speed you might want to look into alternative zip implementations.
I have some video files in a directory, some of which have been wrongly transcoded and therefore can't be opened with QuickTime.
What I was wondering is whether there is some kind of script I could write that would go through all the files in a directory, try to open them with QuickTime, and, if they can't be opened, move them or do something else with them.
My actual file directory would be something like this:
--Main folder
---Subfolder
-----video.mov
-----video.mov
------Sub-Subfolder
--------video.mov
--------video.mov
---Subfolder
-----video.mov
-----video.mov
------Sub-Subfolder
--------video.mov
(...) and so on
I hope I've explained it well so you can understand it... If someone could help me, I'd appreciate it so much.
Thanks!
The looping part is pretty easy and should look like this:
find <folder> -name "*.mov" -print0 | while IFS= read -r -d '' x; do <validate movie file command>; done
For the validation command there's a suitable option in the ffmpeg utility, which is basically a video converter: you can convert the input video to the null format, so ffmpeg just reads the input file and reports any errors that appear.
ffmpeg -v error -i "$x" -f null - 2>error.log
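Putting the two together, here is a sketch that moves any file ffmpeg complains about into a "Broken" folder; the folder name and the temporary log file are my own additions, and the error log is checked as well as the exit code because decoding errors do not always change ffmpeg's exit status:

#!/bin/bash
# Check every .mov under the current folder and set the unreadable ones aside.
mkdir -p Broken
find . -path ./Broken -prune -o -name "*.mov" -print0 | while IFS= read -r -d '' x; do
    errlog=$(mktemp)
    if ! ffmpeg -v error -i "$x" -f null - 2>"$errlog" || [ -s "$errlog" ]; then
        echo "Problem detected in: $x"
        mv "$x" Broken/
    fi
    rm -f "$errlog"
done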
I posted earlier needing help with a script to read a list of ".mp3" URLs from a text file ("URLs.txt"), download each file, rename them in numerical order (1, 2, 3, ...), and save them to a "URLs" folder on the Desktop:
URLs.txt
http://...34566.mp3
http://...234.mp3
http://...126567.mp3
...becomes...
URLs Desktop folder
1.mp3
2.mp3
3.mp3
Shortly after, someone kindly provided the following bash script (for use in Automator):
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
done < ~/Desktop/URLs.txt
However, although the script runs fine, it only downloads files up to somewhere in the range of "47.mp3" to "49.mp3". The script doesn't stop; it just doesn't download anything beyond this range...
I'm very new to bash, so excuse my ignorance, but is it possible that there's a "50 limit" on the script or the web page?
I'm not sure how many URLs my text file has, but it's well over 49.
I've looked through the text file to ensure that all the URL paths are correct, and everything seems fine...
I also downloaded 47 through 52 manually to make sure that they can actually be downloaded, which they can.
No, there is no inherent shell script limit that you are hitting.
Is it possible that the web server you are downloading the MP3s from has a rate limiter which kicks in at 50 downloads in too short a time? If so, you will need to slow down your script.
Try this modification and see what happens if you start at the 50th MP3:
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    ((n >= 50)) && curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
done < ~/Desktop/URLs.txt
If you want to slow it down, add a sleep call to the loop.
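A minimal variant with a pause between downloads (the 2-second delay is an arbitrary value of mine; adjust as needed):

#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
    sleep 2   # brief pause so a rate limiter is less likely to kick in
done < ~/Desktop/URLs.txt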
I am using PVRTexTool to convert .png files to .pvr files, but the tool seems to only be able to run on one file at a time (it won't accept *.png as a file name).
Does anyone know how to run it on a group of files at once?
It's really a hassle to run it on each of my textures.
In a shell, run
for file in *.png; do
    PVRTexTool "$file"
done
(I don't know how to call PVRTexTool from the command line, so please substitute the second line with a correct invocation.)
This is a general way to feed each file to a command which only accepts one file at a time. See any introduction on shell scripting, e.g. this discussion of the for loop.
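If each converted file should end up with a .pvr extension next to its source, a hedged variation (the argument order is a placeholder, not PVRTexTool's documented syntax) derives the output name with the same kind of parameter expansion used in the lame answer above:

for file in *.png; do
    out="${file%.png}.pvr"        # strip .png, append .pvr
    PVRTexTool "$file" "$out"     # placeholder arguments; check the tool's own help
done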