I have six folders under a top-level directory called Images, and each folder contains some images. I know how to read images in MATLAB, but my question is how I can traverse these folders and read the images from my abc.m file.
So basically you want to read images in different folders without putting them all into one folder and using imread()? Because you could just copy all of the images (named in a way that tells you which folder they came from) into your MATLAB working directory and then load them that way.
Use the cd command to change directories (like in *nix) and then load/read the images as you traverse each folder. You might need absolute path names.
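A minimal sketch of that idea, assuming the six folders sit under a top-level directory called Images and contain .jpg files (both assumptions taken from the question):
root = pwd;                                          % remember where we started
d = dir('Images');
d = d([d.isdir] & ~ismember({d.name}, {'.', '..'})); % keep only real subfolders
for k = 1:numel(d)
    cd(fullfile(root, 'Images', d(k).name));         % enter the folder
    files = dir('*.jpg');
    for f = 1:numel(files)
        img = imread(files(f).name);
        % ... process img here ...
    end
end
cd(root);                                            % return to the start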
The easiest way is certainly to right-click the folder in MATLAB and choose "Add to Path" >> "Selected Folders and Subfolders".
Then you can just get images with imread without specifying the path.
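The same thing can be done programmatically; a sketch, assuming the top-level folder is called Images and lives in the current directory:
addpath(genpath('Images')); % put Images and all its subfolders on the path
img = imread('pic1.jpg');   % hypothetical filename, now found via the path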
If you know the path to the directory containing the images, you can use dir on it to list all the files (and directories) in it. Filter the files by the image extension you want and voilà, you have an array with all the images in the directory you specified:
dirname = 'images';
ext = '.jpg';
sDir = dir(fullfile(dirname, ['*' ext]));
sDir([sDir.isdir]) = []; % remove directories
% the following is obsolete now that dir is called with a wildcard ^^
b = arrayfun(@(x) strcmpi(x.name(end-length(ext)+1:end), ext), sDir); % filter by extension
sFiles = sDir(b);
You probably want to prefix the name of each file with the directory before using it:
sFileName{ii} = fullfile(dirname, sFiles(ii).name);
You can then process the resulting files however you want. Loading all of them, for example:
for ii = 1:numel(sFiles)
    data{ii} = imread(fullfile(dirname, sFiles(ii).name));
end
If you also want to recurse the subdirectories, I suggest you take a look at:
How to get all files under a specific directory in MATLAB?
or other solutions on the FEX:
http://www.mathworks.com/matlabcentral/fileexchange/8682-dirr-find-files-recursively-filtering-name-date-or-bytes
http://www.mathworks.com/matlabcentral/fileexchange/15505-recursive-dir
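As a side note, recent MATLAB releases (R2016b and newer, if I recall correctly) let dir recurse by itself with a ** wildcard, which may spare you the extra downloads; a quick sketch:
sFiles = dir(fullfile(dirname, '**', ['*' ext])); % recurse all subfolders
for ii = 1:numel(sFiles)
    data{ii} = imread(fullfile(sFiles(ii).folder, sFiles(ii).name));
end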
EDIT: added Amro's suggestion of wildcarding the dir call
Is there a way in the OSX terminal to replace all images in a folder (they all have incremental filenames) with another image, while keeping each original file name?
I've been googling for a long time but have not found anything useful.
For example, the following files need replacing:
1.jpg
239.jpg
213.jpg
5678.jpg
I need the file names to stay the same, but I want, say, default.jpg to overwrite each image and then take on the name of the image it just replaced. So in theory:
n = file
x = filename
default = c://download/default.jpg
folder = c://downloads
For each n in folder
    x = get filename
    copy default to folder
    delete n
    rename default to x
Loop
So it will programmatically replace all of the files with the new image? I really do not know where to start.
Maybe:
for i in *.jpg
do
  cp default.jpg "$i"
done
That will issue an error once (when it processes default.jpg itself) but should do the job.
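To avoid that one error, you can skip default.jpg explicitly; a small variation on the same loop:
for i in *.jpg
do
  [ "$i" = "default.jpg" ] && continue # don't copy the file onto itself
  cp default.jpg "$i"
done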
OK, so I tweaked the code a bit and got it to work without an error. For example:
for i in /Users/User/downloads/listings/m/folder4/newfolder/*.jpg; do cp -R /Users/User/downloads/listings/m/folder4/newfolder/default.jpg "$i"; done
Just in case anyone else needs to do this. Thanks for your comments and pointing me in the right direction.
Regards
D
I am looking to merge PDF files from two separate folders into a third folder, based on file name.
Directory structure:
FOLDER_1 = File set #1.
FOLDER_2 = File set #2.
MERGED_PDFS = Output of merged files.
FOLDER_1 contains a set of PDF files which could be named with any combination of letters, numbers and allowed symbols.
FOLDER_2 contains a set of PDFs with the exact same names as FOLDER_1. The data on these sheets is different. The files from FOLDER_2 need to be inserted into the files from FOLDER_1, at the end of the file.
The output of this merge will be placed in the MERGED_PDFS folder, retaining the name used to match the files in FOLDER_1 and FOLDER_2.
Example:
FOLDER_1: R000135322.PDF
FOLDER_2: R000135322.PDF
MERGED_PDFS: R000135322.PDF
(MERGED_PDFS contains a merged PDF from FOLDER_1 & FOLDER_2, with the PDF from FOLDER_2 placed at the end of the PDF from FOLDER_1.)
I saw some similar examples of this being done with PDFtk, but I am unsure how to adapt them to get my expected output.
Thanks
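Since you mention PDFtk, here is a hedged sketch of how that could look from a shell, assuming pdftk is on the PATH, the three folders are named exactly as above, and every file in FOLDER_1 has a same-named counterpart in FOLDER_2:
mkdir -p MERGED_PDFS
for f in FOLDER_1/*.PDF; do
  name=$(basename "$f")
  if [ -f "FOLDER_2/$name" ]; then
    # 'cat' appends the FOLDER_2 file after the FOLDER_1 file
    pdftk "$f" "FOLDER_2/$name" cat output "MERGED_PDFS/$name"
  fi
done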
Here's what you need to do:
Install FolderMill
Specify the Incoming folder and the Output folder for FolderMill on your PC
Since you mention that the files in FOLDER_1 and FOLDER_2 have the same filenames, just add the "Convert to PDF" action and select Multipage: "Append pages to existing document" in its options.
Click Apply changes
Start FolderMill by pressing the Play button.
Grab the files from FOLDER_1 and put them into the Incoming folder
Grab the files from FOLDER_2 and do the same.
Receive the merged PDFs from the Output folder
If you are not sure that all the corresponding files have the same filenames, you may also need the "Rename" action.
FYI, we have a detailed step-by-step guide on how to do it (with screenshots).
You are welcome :)
Kind of easy question, but I can't find the answer. I want to extract the contents of multiple zipped folders into a single directory. I am using the bash console, which is the only tool available on the particular website I am using.
For example, I have two archives: a.zip (which contains a1.txt and a2.txt) and b.zip (which contains b1.txt and b2.txt). I want to extract all four text files into a single directory.
I have tried
unzip \*.zip -d \newdirectory
But it creates two directories (a and b) with two text files in each.
I also tried concatenating the two zip files into one big archive and extracting that, but it still creates the two directories, even when I specify a new directory.
I can't figure what I am doing wrong. Any help?
Thanks in advance!
Use the -j option to ignore any directory structure stored in the archives:
unzip -j -d /path/to/your/directory '*.zip'
The quotes keep the shell from expanding the wildcard, so unzip itself processes every matching archive.
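Applied to the example above (a.zip and b.zip), that should leave all four text files side by side:
unzip -j -d newdirectory '*.zip'
ls newdirectory
# a1.txt  a2.txt  b1.txt  b2.txt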
I 7-Zipped a multi-gig folder, which contained many folders each with many files, using the split-to-volumes (9 MB) option. 7-Zip created files of type .zip.001, .zip.002, etc. When I extract .001 it appears to work correctly, but I get an 'unexpected end of data' error. 7-Zip does not automatically continue to .002. When I extract .002, it gives the same error, and it does not continue the original folder/file structure; instead it extracts a zip file into the same folder as the previously extracted files. How do I properly extract split files to obtain the original folder/file structure? Thank you.
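For what it's worth, split volumes normally have to be opened from the first part (with every part present in the same folder) or rejoined first; a sketch of both routes, assuming the parts are named archive.zip.001, archive.zip.002, and so on:
# Route 1: let 7-Zip reassemble the volumes itself, starting from .001
7z x archive.zip.001
# Route 2: concatenate the parts back into one zip, then extract normally
cat archive.zip.001 archive.zip.002 > archive.zip
unzip archive.zip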
I would like to scan text from text files in MATLAB with the textscan function. Before I can open a file with fid = fopen('C:\path'), I need to unzip it first. The files have the extension *.gz.
There are thousands of files which I need to analyze and high performance is important.
I have two ideas:
(1) Use an external program and call it from the command line in MATLAB.
(2) Use a MATLAB zip toolbox. I have heard of gunzip, but don't know about its performance.
Does anyone know a way to unzip these files as quickly as possible from within MATLAB?
Thanks!
You could always try MATLAB's unzip() function:
unzip
Extract contents of zip file
Syntax
unzip(zipfilename)
unzip(zipfilename, outputdir)
unzip(url, ...)
filenames = unzip(...)
Description
unzip(zipfilename) extracts the archived contents of zipfilename into the current folder and sets the files' attributes, preserving the timestamps. It overwrites any existing files with the same names as those in the archive if the existing files' attributes and ownerships permit it. For example, files from rerunning unzip on the same zip filename do not overwrite any of those files that have a read-only attribute; instead, unzip issues a warning for such files.
Internally, this uses Java's zip library org.apache.tools.zip. If your zip archives each contain many text files, it might be faster to drop down into Java and extract them entry by entry, without explicitly creating unzipped files on disk. Look at the source of unzip.m to get some ideas, and also the Java documentation.
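For the .gz files in the question specifically, the matching built-in is gunzip rather than unzip; a short sketch (the wildcard and output folder are illustrative, and wildcard support is per the gzip/gunzip documentation, as I recall):
names = gunzip('*.gz', 'unzipped'); % decompress every .gz into ./unzipped
fid = fopen(names{1});              % then open the results as usual
C = textscan(fid, '%s', 'Delimiter', '\n');
fclose(fid);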
I've found the 7-Zip command line tool (Windows) / p7zip (Unix) to be somewhat speedier for this.
[edit] From some quick testing, it seems making a system call to gunzip is faster than using MATLAB's native gunzip. You could give that a try as well.
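A sketch of that system-call variant (the path is just an example; note that plain gunzip removes the .gz file after decompressing it):
fname = '/data/time_1000.out.gz';           % hypothetical file
status = system(['gunzip -f "' fname '"']); % produces /data/time_1000.out
if status ~= 0
    error('gunzip failed for %s', fname);
end
fid = fopen(fname(1:end-3));                % open the decompressed file
C = textscan(fid, '%s', 'Delimiter', '\n'); % scan it as in the question
fclose(fid);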
Just write a new function that imitates basic MATLAB gunzip functionality:
function [] = sunzip(fullfilename, output_dir)
% Extract FULLFILENAME to OUTPUT_DIR (defaults to the file's own folder)
% by shelling out to 7za.
if ~exist('output_dir', 'var'), output_dir = fileparts(fullfilename); end
app_path = '/usr/bin/7za';
switches = ' e';               % extract files, ignoring directory structure
options  = [' -o' output_dir]; % 7za expects no space after -o
system([app_path switches ' ' fullfilename options]);
end
Then use it as you would use gunzip:
sunzip('/data/time_1000.out.gz',tmp_dir);
Using MATLAB's tic/toc timer, I get the following extraction times for 6 ASCII files of 114 MB each (uncompressed):
gunzip: 10.15s
sunzip: 7.84s
This worked well for me with exactly that system call; the important bit is the space between the switches and the filename:
system([app_path switches ' ' fullfilename options]);