Reduce image sequence frames - bash

So after exporting a video to an image sequence, I have ended up with way more images than I need and would like to trim this down. The images are named 1.png up to 959.png. Is there a convenient way of doing this with a bash/zsh script? Something like removing every other image and renaming the rest to keep the order?
Thanks in advance.

Alright so I found somewhat of a solution.
#!/bin/zsh
c=0
cc=0
ext=".png"
for file in `ls | sort -V`
do
    let c=c+1
    let cc=cc+1
    if [ $c -eq 2 ]; then
        rm -f "$file"        # delete every second file
        c=0
    else
        let cc=cc-1
        new="$cc$ext"
        mv "$file" "$new"    # renumber the kept file
    fi
done
This lists the files in the current directory in version order and loops through them, deleting every other one and renumbering the files that are kept. Just be aware that it will also pick up the script file itself if it sits in the same directory, so you might want to add some logic to skip it.
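For reference, here is a minimal zsh-only sketch of the same idea that touches only the .png files (so the script itself is left alone); it assumes the frames are named 1.png through 959.png as in the question:
#!/bin/zsh
setopt null_glob                     # loop body is skipped if no .png files exist
keep=0
for file in *.png(n); do             # (n) sorts the glob numerically, so 2.png comes before 10.png
    num=${file%.png}
    if (( num % 2 == 0 )); then
        rm -f -- "$file"             # drop every even-numbered frame
    else
        (( ++keep ))
        [[ $file != "$keep.png" ]] && mv -- "$file" "$keep.png"   # renumber kept frames 1.png, 2.png, ...
    fi
done
The kept frames end up numbered 1.png to 480.png, in their original order.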

Related

Use inotifywait to change filename and further loop through sql loader

Objective: the moment multiple .csv files are uploaded to the folder, the code should check each filename; if the filename is appropriate, the file should be passed to SQL*Loader to load its data into the database. Once a file has been loaded, the code should delete the processed file. The next time, the same process repeats.
I have some parts of the code working, but others are causing problems, especially the parts related to inotifywait. Please help.
In the first loop, I am trying to monitor the /uploads folder; the moment it finds a .csv file, it checks whether the filename contains a space. If it does, the space should be changed to an underscore in the filename. I have been trying to find a way to match space, "()" or "," in the filename, but could only get the space replacement working. This gives me an error that the file cannot be moved, no such file or directory.
The second loop works on its own, but not when combined with the first loop; there are errors I have not been able to debug. If I run the second loop separately, it works correctly. If there is a way to better optimize the code into one loop, I would be happy to know. Thanks!
Example: folder name: /../../upload
filenames: abc_123.csv (the code should not change it), pqr(12 Apr).csv (the code should change it to pqr_12_Apr.csv), May 12.csv (the code should change it to May_12.csv), etc.
Once these 3 files are named properly, they should be ready to be loaded through SQL*Loader, and once the files are processed, they get deleted.
My code is:
#!bin/bash
inotifywait -mqe create /../../upload | while read file; do
if [[ $file = '* *'.csv]]; then
mv "$file" ${file// /_}
fi
done
for file in /../..upload/*.csv
do
sqlcommand="sqlldr user/pwd control="/../xxx.ctl" data=$file silent=feedback, header"
$sqlcommand
rm $file
done
Thank you!
I have modified your script to this:
#!/usr/bin/env bash
while IFS= read -r file; do
    filename=${file#* CREATE }
    pathname=${file%/*}
    if [[ $pathname/$filename = *\ *.csv ]]; then
        echo mv -v "$pathname/$filename" "$pathname/${filename// /_}"
    fi
done < <(inotifywait -mqe create /../../upload)
Remove the echo if you think the output is correct.
I just don't know how you can integrate the other parts of your script with that; you could probably create a separate script, or remove the -m (which you most probably don't want to do). You could also use a named pipe if mkfifo is available.
EDIT: as per the OP's message, add another parameter expansion for the extra character removal.
Below the if [[ ... ]]; then line, add:
newfilename=${filename//[()]/}
Then change "${filename// /_}" to "${newfilename// /_}".
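As for integrating the SQL*Loader step into the same loop, here is one hedged sketch. It loads and deletes each file right after renaming it; the sqlldr arguments are copied verbatim from the question, so treat them (and the /../../upload path) as placeholders:
#!/usr/bin/env bash
upload_dir=/../../upload                    # placeholder path from the question

while IFS= read -r line; do
    filename=${line#* CREATE }
    pathname=${line%/*}
    [[ $filename == *.csv ]] || continue    # ignore anything that is not a .csv

    newname=${filename//[ (,]/_}            # spaces, "(" and "," become "_"
    newname=${newname//[)]/}                # ")" is simply dropped
    if [[ $filename != "$newname" ]]; then
        mv -v "$pathname/$filename" "$pathname/$newname"
    fi

    # sqlldr arguments copied from the question; adjust for your environment
    sqlldr user/pwd control=/../xxx.ctl data="$pathname/$newname" silent=feedback,header \
        && rm -f "$pathname/$newname"       # delete only if the load succeeded
done < <(inotifywait -mqe create "$upload_dir")
Note that the create event fires as soon as the file appears, so for large uploads you may prefer to watch close_write instead; otherwise sqlldr can start reading a half-written file.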

What is the best way to loop over two file extensions for a job in a bash script?

How can you deal with two files at a time that have different extensions? More importantly, a file such as file.fa is affiliated with file.qv, and the two need to be processed at the same time.
Issue: suppose you don't know the names of the files (the basenames) or the number of files (there might be a few or hundreds); all we know is the file extension of both kinds of file.
I tried:
for i in `ls -1 -v $input_reads_dir/reads/*.csfasta $input_reads_dir/reads/*.qual` ;do
job -o /out{i} {i}.fa {i}.qv
done
Problems:
It does not work.
I fear that file1.csfasta could be paired with file2.qual, which would be incorrect, or maybe I should not worry about that.
Check to see if the corresponding file exists first.
for f in *.csfasta
do
    if [ -f "${f%.csfasta}.qual" ]
    then
        dosomething "$f" "${f%.csfasta}.qual"
    fi
done
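To also address the pairing worry, deriving both names from the same basename guarantees the two files belong together. Here is a sketch along those lines, treating job, the /out output prefix and $input_reads_dir as the question's placeholders:
shopt -s nullglob                        # skip the loop entirely when no .csfasta files exist
for fa in "$input_reads_dir"/reads/*.csfasta
do
    base=${fa%.csfasta}                  # strip the extension to get the shared basename
    qv=$base.qual
    if [ -f "$qv" ]; then
        job -o "/out/$(basename "$base")" "$fa" "$qv"
    else
        echo "warning: no matching .qual for $fa" >&2
    fi
done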

Bash + gnuplot script for many files in folder

Here is a problem I have been facing for a few days. I want to cut down a lot of work with a simple script, but the script is not working properly.
The script should:
Tail the files in the specified directories ${FOLDER} to strip the 3 header lines.
Change the extension from .gplt to none.
Use gnuplot to plot an output.
All files in those folders begin with:
set term postscript color
set output "x_101.ps"
plot "-" title "magU" with lines
0 0
5.00501e-06 0.00301606
1.001e-05 0.00603211
...
So I am stuck with this; some parts are not working, and that's why I am asking if someone could take a look:
#!/bin/bash
rename(){
newname = $(basename .gplt)
}
FOLDER=(
~/Dokumenty/mgr/obliczenia_OF/ReConst/H20_ReConst_v1/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/ReConst/H20_ReConst_v2/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/ReConst/H20_ReConst_v3/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/ReConst/H20_ReConst_v4/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/ReConst/R134_ReConst_v1/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/ReConst/R134_ReConst_v2/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/ReConst/R134_ReConst_v3/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/ReConst/R134_ReConst_v4/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/ReConst/OM_ReConst_v1/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/ReConst/OM_ReConst_v2/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/ReConst/OM_ReConst_v3/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/ReConst/OM_ReConst_v4/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/PeConst/R134_PecletConst_v1/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/PeConst/R134_PecletConst_v2/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/PeConst/R134_PecletConst_v3/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/PeConst/R134_PecletConst_v4/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/PeConst/OM_PecletConst_v1/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/PeConst/OM_PecletConst_v2/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/PeConst/OM_PecletConst_v3/postProcessing/sets/*
~/Dokumenty/mgr/obliczenia_OF/PeConst/OM_PecletConst_v4/postProcessing/sets/*
)
for file in *; do
tail -n+3 ${file} >> ${file}
done
for ff in *; do
rename ${ff}
done
for f in *; do
gnuplot <<- EOF
set terminal png size 400,250
set output '${f}.png'
set grid
set xlabel 'y' rotate by 360
set ylabel 'U(y)'
plot "${f}" using 2:1 with lines
EOF
done
PS: there is one more thing. The FOLDER entries have sub-folders, which is why I used sets/* at the end, and I am worried that might be wrong.
Cheers
jilsu.
You aren't using FOLDER anywhere. You keep using * in your loops instead. You want to use "${FOLDER[@]}" in your loops.
Your rename function is syntactically invalid. Shell assignment lines require no spaces around the =. So it would need to be newname=$(basename .gplt) but that is just assigning a variable and not actually renaming any files.
You also likely don't need that rename function if all you want is to change file.gplt to file.png in the output gnuplot call. You can, instead, just use $(basename "$f" .gplt) in the HEREDOC.
There seem to be a couple of problems:
The approach with * at the end will not work; use find instead.
find ${FOLDER[i]} -type f
I am not sure what you want to achieve with this one:
tail -n+3 ${file} >> ${file}
What it DOES is duplicate the content of $file starting from line 3 (you are appending to the same file you are reading from).
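Putting the two answers together, a possible rewrite is sketched below. It assumes the entries under sets/ are directories containing the *.gplt files (as the PS suggests) and that each file has exactly the three header lines shown above followed by plain two-column data; gnuplot is told to skip those header lines on the fly instead of rewriting the files:
#!/bin/bash
FOLDER=(
    ~/Dokumenty/mgr/obliczenia_OF/ReConst/H20_ReConst_v1/postProcessing/sets/*
    # ... the remaining directories from the original list ...
)

for dir in "${FOLDER[@]}"; do
    for f in "$dir"/*.gplt; do
        [ -f "$f" ] || continue              # skip directories and non-matches
        name=$(basename "$f" .gplt)          # x_101.gplt -> x_101
        gnuplot << EOF
set terminal png size 400,250
set output '$dir/$name.png'
set grid
set xlabel 'y' rotate by 360
set ylabel 'U(y)'
plot "< tail -n +4 '$f'" using 2:1 with lines
EOF
    done
done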

Cannot change the names of files that are the result of a for loop that echoes file names

I've been successfully running a script that prints out the names of the files in a specific directory by using
for f in data/*
do echo $f
done
and when I run the program it gives me:
data/data-1.txt
data/data-2.txt
data/data-3.txt (the files in the data directory)
However, when I need to change all of the file names from data-*.txt to mydata-*.txt, I can't figure it out.
I keep trying to use sed s/data/mydata/g $f, but it prints out the whole file contents instead of changing the name, and it also tries to change the directory part of the name, so I'm at a dead end. Even using mv doesn't seem to do anything. Can anybody give me some tips on how to change the file names?
for f in data/*
do
    NewName="$( echo "${f}" | sed 's#/data-\([0-9]*\.txt\)$#/mydata-\1#' )"
    if [ ! "${f}" = "${NewName}" ]
    then
        mv "${f}" "${NewName}"
    fi
done
This is based on your code, but there are lots of other ways to do it (e.g. find -exec).
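Since the new name just prepends "my" to the old basename, a shorter alternative (assuming the files all live directly in data/ and match data-*.txt) is:
for f in data/data-*.txt
do
    mv -v "$f" "data/my$(basename "$f")"    # data/data-1.txt -> data/mydata-1.txt
done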

Handling spaces with 'cp'

I've got an external drive with over 1TB of project files on it. I need to reformat this drive so I can reorganize it, but before I do that I need to transfer everything off it. The issue is that I'm on a Mac and the drive is formatted as NTFS, so all I can do is copy from it. I tried to simply copy and paste in Finder, but the drive seems to lock up after roughly 15 minutes of copying that way. So I decided to write a bash script to iterate through all 1000+ files one at a time. This seems to work for files without spaces in their names but skips any file that has one.
Here is what I've hacked together so far. I'm not too advanced in bash, so any suggestions on how to handle the spaces would be great.
quota=800
size=`du -sg /Users/work/Desktop/TEMP`
files="/Volumes/Lacie/EXR_files/*"
for file in $files
do
if [[ ${size%%$'\t'*} -lt $quota ]];
then
echo still under quota;
cp -v $file /Users/work/Desktop/TEMP_EXR;
du -sg /Users/work/Desktop/TEMP_EXR;
else
echo over quota;
fi
done
(I'm checking the directory size because I'm having to split this temporary copy across a few different places before I copy it all back onto the one reformatted drive.)
Hope I'm not misunderstanding. If you have a problem with a space character in a filename, quote it. If you want bash to expand parameters inside it, use double quotes.
cp -v "$file" /Users/work/Desktop/TEMP_EXR
You can put all the file names in an array, then iterate over that.
quota=800
size=`du -sg /Users/work/Desktop/TEMP`
files=( /Volumes/Lacie/EXR_files/* )
for file in "${files[@]}"
do
if [[ ${size%%$'\t'*} -lt $quota ]];
then
echo still under quota;
cp -v "$file" /Users/work/Desktop/TEMP_EXR;
du -sg /Users/work/Desktop/TEMP_EXR;
else
echo over quota;
fi
done
The two things to note are 1) quoting the array expansion in the for list, and 2) quoting $file for the cp command.
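One more detail worth flagging (beyond the quoting): size is computed once before the loop, so the quota test never changes while you copy. A sketch that also refreshes it on every pass, assuming the quota is meant to track the destination directory (the question leaves this open):
quota=800
files=( /Volumes/Lacie/EXR_files/* )
for file in "${files[@]}"
do
    size=$(du -sg /Users/work/Desktop/TEMP_EXR)      # re-measure the destination each time
    if [[ ${size%%$'\t'*} -lt $quota ]]
    then
        echo "still under quota"
        cp -v "$file" /Users/work/Desktop/TEMP_EXR
    else
        echo "over quota"
        break                                        # stop once the quota is reached
    fi
done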
