Replace all images in folder with another image but keep filename - bash

Is there a way in the OS X Terminal to replace all images in a folder (they all have incremental filenames) with another image, while keeping the original file names?
I've been googling for a long time but have not found anything useful.
For example, the following files need replacing:
1.jpg
239.jpg
213.jpg
5678.jpg
I need the file names to stay the same, but I want, say, default.jpg to overwrite each image and then take on the name of the image it just replaced. So in theory:
n = file
x = filename
default = c://download/default.jpg
folder = c://downloads
For each n in folder
    x = get filename
    copy default to folder
    delete n
    rename default to x
Loop
So it would programmatically replace all of the files with the new image. I really do not know where to start.

Maybe :
for i in *.jpg
do
    cp default.jpg "$i"
done
That will issue an error once (when it processes default.jpg and tries to copy the file onto itself), but it should do the job.
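If the one-off error bothers you, a small guard that skips default.jpg itself avoids it (this assumes default.jpg sits in the same folder as the images):
for i in *.jpg; do
    [ "$i" = "default.jpg" ] && continue  # don't copy the template onto itself
    cp default.jpg "$i"
done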

OK, so I tweaked the code a bit and got it to work without an error. For example:
for i in /Users/User/downloads/listings/m/folder4/newfolder/*.jpg; do cp -R /Users/User/downloads/listings/m/folder4/newfolder/default.jpg "$i"; done
Just in case anyone else needs to do this. Thanks for your comments and pointing me in the right direction.
Regards
D

Related

How to rename photos using information from csv file

I am new to Unix and could really use your help.
I want to rename a lot of photographs so they correspond with the codes of the items in each picture. I have a .csv file that lists each original .jpg name followed by the codes I want the photo renamed to, in consecutive columns. For example:
IMG_1234.JPG,AB001,AB003,AB004
IMG_1345.JPG,AB011,AB012,AB013,AB014,AB015
IMG_1456.JPG,AB112
IMG_1678.JPG,AB125,AB126
So I want IMG_1234.JPG copied 3 times and renamed to AB001, AB003, and AB004, and so on.
I know I need a script, and that I can copy and rename files, but I can't figure out how to make a script run through the CSV file, copy and rename each .jpg to the names that follow it until it hits an empty cell, and then move on to the next row.
I hope my question is clear and I apologize for my limited knowledge.
Thanks in advance!
Edit: The image names have directories (with spaces) in front of them, because the photographs are in different folders. For example:
./Photos sorted/Samples1-100/IMG_1134.JPG
This should do what you want. The filename of the CSV file is given as a parameter to the script. The copies are created in the same directory as the original image; adjust the paths inside the copy command if you want them elsewhere. If you are using this on a Mac or Linux, you can also use "ln -s" instead of "cp" to create a symbolic link to the original file and save disk space.
#!/bin/bash
CSVFILE=$1
# Read the CSV line by line and split each line on commas, so that
# file names containing spaces survive intact.
while IFS=',' read -r -a FIELDS; do
    SRCNAME=${FIELDS[0]}                 # first column: the existing image
    for DSTNAME in "${FIELDS[@]:1}"; do  # remaining columns: the new names
        [ -n "$DSTNAME" ] || continue    # skip empty cells
        cp "$SRCNAME" "$(dirname "$SRCNAME")/${DSTNAME}.JPG"
    done
done < "$CSVFILE"
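A hypothetical invocation, assuming the script was saved as renamer.sh next to the CSV file:
chmod +x renamer.sh
./renamer.sh codes.csv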

How to set automator (mac) to create folders and fill them with x number of files?

I have seen several variations of this question asked, but none of them fully answers my problem. I have several folders that contain between 2,000 and 150,000 image files. Searching these directories becomes very inefficient as the number of files increases, since speed drops drastically.
What I want to do is use automator to:
1. select folder
2. create subfolder
3. fill newly created subfolder with the first 1000 files in the folder selected in (1.)
4. if more files exist in the outer folder, create another subfolder and fill with the next 1000 files etc.
Is this possible? Any help is much appreciated.
Thank you,
Brenden
This takes a directory and moves the contents to new folders called "newFolder1", "newFolder2" etc.
Have you used Terminal much? Let me know if you need more instruction. I also haven't put in any checks, so let me know if you get any errors.
o Save this file to your desktop (as script.sh for the purposes of this tutorial)
#!/bin/bash
cd "$1"           # change directory to the folder to sort
filesPerFolder=$2 # how many files will go in each folder

currentDir='newFolder1'
currentFileCount=0
currentDirCount=1
mkdir "$currentDir"

for file in *
do
    if [ -f "$file" ]
    then
        mv "$file" "$currentDir"
        currentFileCount=$(($currentFileCount + 1))
        if [ $(($currentFileCount % $filesPerFolder)) -eq 0 ] # every X files, make a new folder
        then
            currentDirCount=$(($currentDirCount + 1))
            currentDir='newFolder'$currentDirCount
            mkdir "$currentDir"
        fi
    fi
done
o Open Terminal and type cd ~/Desktop/
o Type chmod 777 script.sh to change the permissions on the file
o Type ./script.sh "/path/to/folder/you/want/to/sort" 30
o The 30 here is how many files you want in each folder.

How to read images from folders in matlab

I have six folders like this >> Images
and each folder contains some images. I know how to read images in MATLAB, but my question is how I can traverse through these folders and read the images in my abc.m file (the file shown in the image).
So basically you want to read images from different folders without putting them all into one folder and using imread()? You could just copy all of the images (naming them in a way that tells you which folder they came from) into your MATLAB working directory and then load them that way.
Use the cd command to change directories (like in *nix) and then load/read the images as you traverse through each folder. You might need absolute path names.
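A minimal sketch of that approach (the folder and file names are hypothetical):
% hypothetical layout: Images/folder1/pic01.jpg
cd('Images/folder1');       % step into one of the six folders
img = imread('pic01.jpg');  % read an image from that folder
cd('../..');                % go back to the starting directory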
The easiest way is certainly to right-click the folder in MATLAB and choose "Add to Path" >> "Selected Folders and Subfolders".
Then you can just read the images with imread without specifying the path.
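In script form, the same idea looks like this (the Images folder name is an assumption):
addpath(genpath('Images')); % add Images/ and all its subfolders to the path
img = imread('pic01.jpg');  % imread now finds the file wherever it lives under Images/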
If you know the path to the directory containing the images, you can use dir on it to list all the files (and directories) in it. Filter the files by the image extension you want and voila, you have an array with all the images in the directory you specified:
dirname = 'images';
ext = '.jpg';
sDir = dir( fullfile(dirname, ['*' ext]) );
sDir([sDir.isdir]) = []; % remove directories
% The following is obsolete because the dir call above is already wildcarded:
b = arrayfun(@(x) strcmpi(x.name(end-length(ext)+1:end), ext), sDir); % filter on extension
sFiles = sDir(b);
You probably want to prefix the name of each file with the directory before using it:
sFileNames{ii} = fullfile(dirname, sFiles(ii).name);
You can process the resulting files however you want. Loading all of them, for example:
for ii = 1:numel(sFiles)
    data{ii} = imread(fullfile(dirname, sFiles(ii).name));
end
If you also want to recurse the subdirectories, I suggest you take a look at:
How to get all files under a specific directory in MATLAB?
or other solutions on the FEX:
http://www.mathworks.com/matlabcentral/fileexchange/8682-dirr-find-files-recursively-filtering-name-date-or-bytes
http://www.mathworks.com/matlabcentral/fileexchange/15505-recursive-dir
EDIT: added Amro's suggestion of wildcarding the dir call

Concatenating images from a folder

I have a series of images saved in a folder, and I have written a short program to open two of these image files, concatenate them (preferably vertically, although for now I am trying horizontally), then save this new image to the same folder. This is what I have written so far:
function concatentateImages
%this is the folder where the original images are located
path='/home/packremote/SharedDocuments/Amina/zEXAMPLE/';
file1 = strcat(cr45e__ch_21', '.pdf');
[image1,map1] = imread(graph1);
file2 = strcat('cr45f__ch_24', '.jpg');
[image2,map2] = imread(graph2);
image1 = ind2rgb(image1,map1);
image2 = ind2rgb(image2,map2);
image3 = cat(2,image1,image2);
%this is the directory where I want to save the new images
dircase=('/home/packremote/SharedDocuments/Amina/zEXAMPLE/');
nombrejpg=strcat(dircase, 'test', jpgext)
saveas(f, nombrejpg, 'jpg')
fclose('all');
However, I keep getting an error that my files do not exist, though I am certain the names are copied correctly.
I am currently using jpg files, but the format can be easily converted.
Any input on how to fix this error, or a nicer way of performing this task, is greatly appreciated!
Cheers,
Amina
Replace
[image1,map1] = imread(graph1);
and
[image2,map2] = imread(graph2);
by
[image1,map1] = imread(file1);
and
[image2,map2] = imread(file2);
Also check that you are in the right working directory.
In addition to the answer by @Simon, you also need to change
file1 = strcat(cr45e__ch_21', '.pdf');
to
file1 = strcat('cr45e__ch_21', '.pdf');
I.e. you forgot a '. Also your function doesn't seem to include a definition of jpgext. I expect you want a line like
jpgext = '.jpg';
Lastly, mostly a coding practice issue, but you might want to switch to using fullfile to build your full file path.
Also, instead of worrying about being in the correct working directory, if you use full paths you save yourself from having to keep track of what directory you're in.
So I would suggest:
dir1 ='/home/packremote/SharedDocuments/Amina/zEXAMPLE/';
file1 = fullfile(dir1, 'cr45e__ch_21.pdf');
etc
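Putting both fixes together, a corrected sketch of the whole function could look like the following. It assumes both inputs are really JPEGs (imread cannot read PDFs, and the question says the files are .jpg), and it uses imwrite instead of saveas, since imwrite writes the image matrix directly and needs no figure handle f. Drop the ind2rgb calls if the images are plain RGB rather than indexed:
function concatenateImages
    %folder where the original images live and where the result is saved
    dir1 = '/home/packremote/SharedDocuments/Amina/zEXAMPLE/';
    file1 = fullfile(dir1, 'cr45e__ch_21.jpg');
    file2 = fullfile(dir1, 'cr45f__ch_24.jpg');
    [image1, map1] = imread(file1);
    [image2, map2] = imread(file2);
    image1 = ind2rgb(image1, map1); % only needed for indexed images
    image2 = ind2rgb(image2, map2);
    image3 = cat(2, image1, image2); % use cat(1, ...) to stack vertically
    imwrite(image3, fullfile(dir1, 'test.jpg'));
end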

BASH script to copy files based on date, with a catch

Let me explain the tree structure: several times a day, our database copies new .txt files to a network directory. Those files sit in directories named after usernames. On the local disk I have the same structure (directories based on usernames) that needs to be updated with the latest .txt files. It's not a sync procedure: I copy the remote file to a local destination and I don't care what happens to it after that, so I don't need to keep the two in sync. However, I do need to copy ONLY the new files, not those I have already copied. It would look something like:
Remote disk
/mnt/remote/database
+ user1/
+ user2/
+ user3/
+ user4/
Local disk
/var/database
+ user1/
+ user2/
+ user3/
+ user4/
I played with
find /mnt/remote/database/ -type f -mtime +1
and other variants, but it's not working very well.
So, the script I am trying to figure out is the following:
1- check /mnt/remote/database recursively for *.txt
2- check the files date to see if they are new (since the last time I checked, maybe maintain a text file with the last time checked on it as a reference?)
3- if the file is new, copy it to the proper destination in /var/database (so /mnt/remote/database/user1/somefile.txt will be copied to /var/database/user1/)
I'll run the script through a cron job.
I'm doing this in C right now, but the IT people are not very good at debugging or writing C; if they need to add or fix something, they can handle bash scripts better, which I am not very good at.
Any ideas out there?
Thank you!
You could consider using rsync locally between the input and output directories. It has all the options you need to make its sync policy very flexible.
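For example, something along these lines copies everything not yet present locally while preserving the user1/, user2/ structure (a sketch; check the flags against your rsync version):
rsync -av --ignore-existing /mnt/remote/database/ /var/database/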
find /mnt/remote/database/ -type f -newer $TIMESTAMP_FILE | xargs $CP_COMMAND
touch $TIMESTAMP_FILE
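A fuller sketch of the timestamp idea, with hypothetical paths, that copies only new .txt files and preserves the per-user subdirectories:
#!/bin/bash
SRC=/mnt/remote/database   # hypothetical remote mount
DST=/var/database          # hypothetical local tree
STAMP=$DST/.last_sync      # marker file; create it once with touch before the first run

NEWSTAMP=$(mktemp)         # stamp the start time, so files that change during the
                           # copy are picked up next run instead of being missed
find "$SRC" -type f -name '*.txt' -newer "$STAMP" | while read -r f; do
    rel=${f#"$SRC"/}                  # e.g. user1/somefile.txt
    mkdir -p "$DST/$(dirname "$rel")" # make sure the user folder exists locally
    cp "$f" "$DST/$rel"
done

mv "$NEWSTAMP" "$STAMP"    # remember when this run started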
The solution is here:
http://www.movingtofreedom.org/2007/04/15/bash-shell-script-copy-only-files-modifed-after-specified-date/
