I want to know how to split a file name that was found by an ls command :D. Let me explain better...
I have a variable
images=$( ls | grep .img )
And then I want to split the result of the search, because I just want the name before the .img, so a nice idea is to use IFS.
IFS=. read -r disk <<< $image
Pretty nice, but when I do an echo of the $disk variable, all I see is ".img"; I want to recover what is before that dot.
Thank you all, and sorry for any mistake :)
Use the stream editor sed! Example:
echo "myImage.jpg" | sed 's/.jpg//'
That s means "substitute", and you substitute the part between the first two slashes for the part between the second and third slash, so in the example, ".jpg" for the empty string.
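Applied to the question's .img case, the same idea looks like this (a minimal sketch; $image holding a single file name is an assumption):
image="disk1.img"    # example value
disk=$(echo "$image" | sed 's/\.img$//')
echo "$disk"         # prints: disk1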
That's all!
Since you mention using <<<, I'll assume you are using a shell that supports arrays (specifically bash; other shells, notably zsh, may use different syntax for what I am about to describe).
images=( *.img ) # No need to parse ls, which is generally a bad idea
Assuming there is only one matching file, you can then use
disk=${images%.img}
to strip .img from the file name and save the remaining portion in disk. If there could be multiple matches, you can apply the extension stripping to each element of the array and store the result in a second array.
disks=( "${images[#]%.img}" )
basename is what you want.
From the man page:
basename - strip directory and suffix from filenames
EXAMPLES
basename /usr/bin/sort
Output "sort".
basename include/stdio.h .h
Output "stdio".
Related
I have a folder which contains hundreds of files. I also have another file which contains the names of all files in the directory, as follows:
>myfile.txt
11j
33t
dsvd33
343
im#3
I would like to write a bash script such that it goes through each line of myfile.txt, selects the file name (file ID) in each iteration, and passes it to my CrunchMe.
More specifically:
#!/bin/bash
for ID in myfile.txt:
# do this
CrunchMe ID
end
Can anyone help me with it?
@saterHater is almost right. In his solution you have to set the internal field separator (IFS) if your filenames contain spaces. Instead of his solution, I personally would put the filenames into a for loop. Note that the for loop has some memory overhead (the whole file is read at once), but reading about CrunchMe I think that doesn't really matter.
#! /bin/bash
IFS=$'\n'
for ID in $(cat myfile.txt); do
CrunchMe "$ID"
done
Otherwise, the link @chepner posted could be a solution. The following code uses less memory and more CPU, because it reads the file line by line.
#! /bin/bash
while IFS='' read -r ID; do
CrunchMe "$ID"
done < myfile.txt
Both scripts have the same output. In my test case that's:
1.txt
a 1.txt
lala .txt
I think you want to take a look at child processes when you're changing a lot of files. In that case you should consider the disk overhead, but that's off topic for this thread ;-)
I'm a newbie answer-er, so my apologies if this isn't helpful.
The two things you need to know are command substitution $( ) and the variable prefix $:
for ID in $(cat myfile.txt) ; do
    CrunchMe "$ID"
done
I am assuming you'll run this in the local directory where both your myfile.txt and ALL the other files are.
Otherwise you can run a find subshell to locate the correct absolute path name of the $ID file.
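For example, something like this (a sketch; /some/base/dir is a placeholder, and -print -quit is GNU find syntax for stopping at the first match):
while read -r ID; do
    ID_PATH=$(find /some/base/dir -name "$ID" -print -quit)
    [ -n "$ID_PATH" ] && CrunchMe "$ID_PATH"
done < myfile.txt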
I'm writing a small piece of code that checks a specific folder for .mov files over 4 GB and writes their names (without the extension) to a log.txt file. I'm then reading the names into a while loop line by line, which triggers some archiving and copying commands.
Consider a file named abcdefg.mov (new) and a corresponding folder somewhere else named abcdefg_20180525 (<- underscore timestamp) that also contains a file named abcdefg.mov (old).
When reading in the filename from the log.txt, I strip the extension to store the variable "abcdefg" ($in1), and I'm using that variable to locate a folder elsewhere that contains that matching string at the beginning.
My problem is with how the mv command seems to support a wild card in the "source" string, but not in the "destination" string.
For example, I can write:
mv -f /Volumes/Myshare/SourceVideo/$in1*/$in1.mov /Volumes/Myshare/Archive
However, a wildcard in the destination doesn't work in the same way. For example:
mv -f /Volumes/Myshare/Processed/$in1.mov Volumes/Myshare/SourceVideo/$in1*/$in1.mov
Is there an easy fix here that doesn't involve using another method?
Cheers for any help.
mv accepts a single destination path. Suppose that $in1 is abcdefg, and that $in1* expands to abcdefg_20180525 and abcdefg_20180526. Then the command
mv -f /dir1/$in1.mov /dir2/$in1*/$in1.mov
will be equivalent to:
mv -f /dir1/abcdefg.mov /dir2/abcdefg_20180526/abcdefg.mov
mv -f /dir2/abcdefg_20180525/abcdefg.mov /dir2/abcdefg_20180526/abcdefg.mov
Moreover, because the destination file is the same in both cases, the first file will be overwritten by the second.
You should create a precise list and do a precise copy instead of using wildcards.
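One hedged way to do that (paths taken from the question; checking for exactly one match is an assumption about the intent):
dirs=( /Volumes/Myshare/SourceVideo/$in1*/ )   # expand the wildcard once, up front
if [ "${#dirs[@]}" -eq 1 ] && [ -d "${dirs[0]}" ]; then
    mv -f "/Volumes/Myshare/Processed/$in1.mov" "${dirs[0]}"
else
    echo "expected exactly one matching folder for $in1, found: ${dirs[*]}" >&2
fi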
This is what I would probably do: generate a list of results in a file with FULL path information, then read those results in another function. I could have used arrays, but I wanted to keep it simple. At the bottom of this script is a function call to scan for files with the extension mp4 (case insensitive), which writes the results to a file in /tmp. The script then reads the results from that file in another function and performs some operation (mv etc.). Note: if functions are confusing, you can just remove the function names, the { }, and the calls, and it becomes a normal script again. Functions are really handy; learn to love them!
#!/usr/bin/env bash
readonly SIZE_CHECK_LIMIT_MB="10M"
readonly FOLDER="/tmp"
readonly DESTINATION_FOLDER="/tmp/archive"
readonly SAVE_LIST_FILE="/tmp/$(basename "$0")-save-list.txt"
readonly EXT="mp4"
readonly CASE="-iname" # change to -name for exact ext type upper/lower

function find_files_too_large() {
    > "${SAVE_LIST_FILE}"
    find "${FOLDER}" -maxdepth 1 -type f "${CASE}" "*.${EXT}" -size +${SIZE_CHECK_LIMIT_MB} -print0 | while IFS= read -r -d $'\0' line ; do
        echo "FOUND => $line"
        echo "$line" >> "${SAVE_LIST_FILE}"
    done
}

function archive_large_files() {
    local read_file="${SAVE_LIST_FILE}"
    local write_folder="$DESTINATION_FOLDER"
    if [ ! -s "${read_file}" ] || [ ! -f "${read_file}" ] ; then
        echo "No work to be done ... "
        return
    fi
    while IFS= read -r line ; do
        # dry run: echo the mv command instead of executing it
        echo "mv $line $write_folder" ; sleep 1
    done < "${read_file}"
}

# MAIN (this is where the script starts). We just call two functions.
find_files_too_large
archive_large_files
It might be easier, I think, to change the filenames to the folder name initially, so abcdefg.mov would be abcdefg_timestamp.mov. I can always strip the timestamp from the filename easily enough after it's copied to the right location. I was hoping I had a small syntax issue, but I think there is no easy way of doing what I thought I could...
I think you have a basic misunderstanding of how wildcards work here. The mv command doesn't support wildcards at all; the shell expands all wildcards into lists of matching files before they get passed to the command as arguments. Furthermore, the mv command doesn't know whether the list of arguments it got came from wildcards or not, and the shell doesn't know anything about what the command is going to do with them. For instance, if you run the command grep *, the grep command just gets a list of names of files in the current directory as arguments, and will treat the first of them as a regex pattern ('cause that's what the first argument to grep is) to search the rest of the files for. If you run mv * (note: don't do this!), it will interpret all but the last filename as sources, and the last one as a destination.
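You can watch this happen by putting echo in front of a command, which prints the argument list the shell actually built (a quick hypothetical demo in a directory containing two .mov files):
$ touch a.mov b.mov
$ echo mv *.mov /tmp
mv a.mov b.mov /tmp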
I think there's another source of confusion as well: when the shell expands a string containing a wildcard, it tries to match the entire thing to existing files and/or directories. So when you use Volumes/Myshare/SourceVideo/$in1*/$in1.mov, it looks for an already-existing file in a matching directory; since (as I understand it) the file isn't there yet, there's no match. What the shell does in that case is pass the raw (unexpanded) wildcard-containing string to mv as an argument; mv then looks for that exact name, doesn't find it, and gives you an error.
(BTW, should there be a "/" at the front of that pattern? I assume so below.)
If I understand the situation correctly, you might be able to use this:
mv -f /Volumes/Myshare/Processed/$in1.mov /Volumes/Myshare/SourceVideo/$in1*/
Since the filename isn't supplied in the second string, it doesn't look for existing files by that name, just directories with the right prefix; mv will automatically retain the filename from the source.
However, I'll echo @Sergio's warning about chaos from multiple matches. In this case, it won't overwrite files (well, it might, but for other reasons), but if it gets multiple matching target directories it'll move all but the last one into the last one (along with the file you meant to move). You say you're 100% certain this won't be a problem, but in my experience that means that there's at least a 50% chance that something you'd never have thought of will go ahead and make it happen anyway. For instance, is it possible that $in1 could wind up empty, or contain a space, or...?
Speaking of spaces, I'd also recommend double-quoting all variable references. You want the variables inside double-quotes, but the wildcards outside them (or they won't be expanded), like this:
mv -f "/Volumes/Myshare/Processed/$in1.mov" "/Volumes/Myshare/SourceVideo/$in1"*/
Hi, I want to write a script that will go to a directory with many files, search for a filename, e.g. test_HTTP_abc.txt, containing the HTTP string pattern, and then search that file; if it contains the string, set a variable equal to something:
something like:
var1=0
search for 06
if it contains 06 then
var1=1
else
var1=0
end if
but as a Unix script. Thanks.
Probably the simplest thing is:
if test "${filename#*HTTP}" = "$filename"; then
# the variable does not contain the string `HTTP`
var=0
else
var=1
fi
Some shells allow regex matches in [[ comparisons, but it's not necessary to introduce that sort of non-portable code into your script.
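For reference, the bash-specific version mentioned above would look something like this (a sketch using [[ pattern matching rather than a POSIX test):
if [[ $filename == *HTTP* ]]; then
    var=1
else
    var=0
fi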
Like this?
var=0
if fgrep -q 06 /path/to/dir/*HTTP*
then
var=1
fi
fgrep will return 0 ("true") if there is a match in one of the files, and non-zero ("false") otherwise (including the case of no matching input files).
If you want a list of matching files, try fgrep -l.
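For example (the directory path is a placeholder):
fgrep -l 06 /path/to/dir/*HTTP*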
Well, I'm not going to write the script for you, you have to learn :)
It's easy if you break it down into smaller tasks:
The ls command is for looking at a directory's contents. You can also use the find command to be a bit more intuitive, like find /some/folder -name "*string*"
To sift through the output of a command, you can store it in a variable or pass it along using pipes.
You can search this output with something like awk (link), grep (link) and so on.
Setting variables is easy also in bash; http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-5.html
foundit=1
Why don't you have a go at solving this puzzle first rather than having someone tell you :D Show us where you get stuck.
I want to iterate over a list of files in Bash and perform some action. The problem: the file names may contain whitespace, which creates an obvious problem with wildcards or ls:
touch a\ b
FILES=* # or $(ls)
for FILE in $FILES; do echo $FILE; done
yields
a
b
Now, the conventional way to handle this is to use find … -print0 instead. However, this only works (well) in conjunction with xargs -0, not with Bash variables / loops.
My idea was to set $IFS to the null character to make this work. However, the consensus on comp.unix.shell seems to be that this is impossible in bash.
Bummer. Well, it’s theoretically possible to use another character, such as : (after all, $PATH uses this format, too):
IFS=$':'
FILES=$(find . -print0 | xargs -0 printf "%s:")
for FILE in $FILES; do echo $FILE; done
(The output is slightly different but fair enough.)
However, I can't help feeling that this is clumsy, and that there should be a more direct way of doing this, preferably using wildcards or ls.
The best way to handle this is to store the file list as an array, rather than a string (and be sure to double-quote all variable substitutions):
files=(*)
for file in "${files[@]}"; do
echo "$file"
done
If you want to generate an array from find's output (e.g. if you need to search recursively), see this previous answer.
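For instance, one common pattern (a sketch; mapfile -d '' needs bash 4.4 or later):
# read find's NUL-delimited output into an array, then loop safely
mapfile -d '' -t files < <(find . -type f -print0)
for file in "${files[@]}"; do
    echo "$file"
done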
Exactly what you have in the first example works fine for me in Msys Bash, Cygwin and on my Fedora box:
FILES=*
for FILE in $FILES
do
echo $FILE
done
It's very important to precede this with
IFS=""
otherwise files with two consecutive spaces will not be found.
I don't really know that much about bash scripts OR imagemagick, but I am attempting to create a script in which you can give some sort of regexp matching pattern for a list of images and then process those into new files that have a given filename prefix.
For example, given the following dir listing:
allfiles01.jpg allfiles02.jpg
allfiles03.jpg
I would like to call the script like so:
./resizemany.sh allfiles*.jpg 30 newnames*.jpg
The end result of this would be that you get a bunch of new files named with the newnames prefix, where the numbers match up.
So far what I have is:
IMAGELIST=$1
RESIEZFACTOR=$2
NUMIMGS=length($IMAGELIST)
for(i=0; i<NUMIMGS; i++)
convert $IMAGELIST[i] -filter bessel -resize . RESIZEFACTOR . % myfile.JPG
Thanks for any help...
The parts that I obviously need help with are
1. how to give a bash script matching criteria that it understands
2. how to use the $2 without having it match the 2nd item in the image list
3. how to get the length of the image list
4. how to create a proper for loop in such a case
5. how to do proper text replacement for a shell command whereby you are appending items, as I allude to above.
jml
Probably the way a standard program would work would be to take an "in" filename pattern and an "out" filename pattern and perform the operation on each file in the current directory that matches the "in" pattern, substituting appropriate parts into the "out" pattern. This is pretty easy if you have a hard-coded pattern, in which case you can write one-off commands like
for infile in *.jpg; do convert "$infile" -filter bessel -resize 30% "${infile//allfiles/newnames}"; done
In order to make a script that will do this with any pattern, though, you need something more complicated because your filename transformation might be something more complicated than just replacing one part with another. Unfortunately Bash doesn't really give you a way to identify what part of the filename matched a specific part of the pattern, so you'd have to use a more capable regular expression engine, like sed for example:
#!/bin/bash
inpattern=$1
factor=$2
outpattern=$3
for infile in *; do
    outfile=$(echo "$infile" | sed -n "s/$inpattern/$outpattern/p")
    test -z "$outfile" && continue
    convert "$infile" -filter bessel -resize "${factor}%" "$outfile"
done
That could be invoked as
./resizemany.sh 'allfiles\(.*\).jpg' 30 'newnames\1.jpg'
(note the single quotes!) and it would resize allfiles1.jpg to newnames1.jpg, etc. But then you'd wind up basically having to learn sed's regular expression syntax to specify your in and out patterns. (It's not that bad, really)
You could eliminate the regex problem if you make a folder of all the files to be processed, and then run something like:
for img in *.jpg
do
    convert "$img" -filter bessel -resize 30% "processed-$img"
done
Then, if you need to rename them all later, you could do something like:
ls | nl -nrz -w2 | while read -r a b; do mv "$b" "newfilename.$a.jpg"; done
Also, if you are doing a batch of the same operation, you might see if using mogrify helps (ImageMagick's tool for converting multiple files). As with the above example, it's always good to make a copy of the folder before running any processing, so you don't destroy your original files.
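A hedged sketch of that route (the processed directory name is just an example; -path tells mogrify to write the results there instead of overwriting the originals in place):
mkdir -p processed
mogrify -path processed -filter bessel -resize 30% *.jpg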
Your script should be called using a syntax such as:
./resizemany.sh -r 30 -n newnames -o allfiles allfiles*.jpg
and use getopts to process the options. What you may not be aware of is that the shell expands the file glob before the script gets it, so with the way you had your arguments, your script would never be able to distinguish the filenames from the other parameters.
Output files will be renamed using the rename script often found on systems with Perl installed. A file named "allfiles03.jpg" will be output as "newnames03.jpg".
#!/bin/bash
options=":r:n:o:"
while getopts $options option
do
    case $option in
        n)
            newnamepattern=$OPTARG
            ;;
        o)
            oldnamepattern=$OPTARG
            ;;
        r)
            resizefactor=$OPTARG
            ;;
        \?)
            echo "Invalid option"
            exit 1
            ;;
    esac
done
# a check to see if any options are missing should be performed (not implemented)
shift $((OPTIND - 1))
# now all that's left will be treated as filenames
for file
do
    convert (input options) "$file" -resize "$resizefactor" (output options) "${file}.out"
    rename "s/$oldnamepattern/$newnamepattern/;s/\.out$//" "${file}.out"
done
This is untested (obviously since most of the arguments to convert are missing).
Parameter validation such as range checks, missing required options and others are left as exercises for further development. Also absent are checks for successful completion of one step before continuing to the next one. Also issues such as locations of files and name collisions and others are not addressed.
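As one sketch of the missing-option check flagged in the comment above (using the script's own variable names):
if [ -z "$resizefactor" ] || [ -z "$oldnamepattern" ] || [ -z "$newnamepattern" ]; then
    echo "Usage: $0 -r factor -o oldpattern -n newpattern file..." >&2
    exit 1
fi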