I am looking to merge PDF files from two separate folders into a third folder, based on file name.
Directory structure:
FOLDER_1 = File set #1.
FOLDER_2 = File set #2.
MERGED_PDFS = Output of merged files.
FOLDER_1 contains a set of PDF files which could be named with any combination of letters, numbers and allowed symbols.
FOLDER_2 contains a set of PDFs with exactly the same names as in FOLDER_1. The data on these sheets is different. Each file from FOLDER_2 needs to be appended to the end of the corresponding file from FOLDER_1.
Each merged file will be placed in the MERGED_PDFS folder, retaining the name used to match the files in FOLDER_1 and FOLDER_2.
Example:
FOLDER_1: R000135322.PDF
FOLDER_2: R000135322.PDF
MERGED_PDFS: R000135322.PDF
(MERGED_PDFS contains a merged PDF from FOLDER_1 & FOLDER_2, with the PDF from FOLDER_2 placed at the end of the PDF from FOLDER_1.)
I saw some similar examples of this being done with PDFtk, but I'm unsure how to adapt them to get my expected output.
Thanks
Here's what you need to do:
Install FolderMill
Specify the Incoming folder and the Output folder for FolderMill on your PC
Since you mention that the files in FOLDER_1 and FOLDER_2 have the same filenames, just add a "Convert to PDF" action and select Multipage: "Append pages to existing document" in the options.
Click Apply changes
Start FolderMill by pressing the Play button.
Grab the files from FOLDER_1 and put them into the Incoming folder
Grab the files from FOLDER_2 and do the same.
Receive the merged PDFs from the Output folder
If you are not sure whether all the corresponding files have the same filenames, you may also need to use the "Rename" action.
FYI, we have a detailed step-by-step guide on how to do it (with screenshots).
You are welcome :)
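If you would rather script it along the lines of the PDFtk examples you found, here is a minimal Python sketch using the pypdf library (pypdf is just one assumed option; any PDF toolkit that can append documents would work the same way). It walks FOLDER_1, looks for a same-named file in FOLDER_2, and writes the combined result to MERGED_PDFS:

# Minimal sketch, assuming the pypdf library (pip install pypdf) and the
# folder names from the question.
from pathlib import Path
from pypdf import PdfWriter

folder_1 = Path("FOLDER_1")
folder_2 = Path("FOLDER_2")
merged = Path("MERGED_PDFS")
merged.mkdir(exist_ok=True)

# use "*.PDF" instead if the extension is upper-case on a case-sensitive filesystem
for pdf_1 in sorted(folder_1.glob("*.pdf")):
    pdf_2 = folder_2 / pdf_1.name          # matching file in FOLDER_2
    if not pdf_2.exists():
        print(f"no match for {pdf_1.name}, skipping")
        continue
    writer = PdfWriter()
    writer.append(str(pdf_1))              # FOLDER_1 pages first
    writer.append(str(pdf_2))              # FOLDER_2 pages appended at the end
    with open(merged / pdf_1.name, "wb") as out:
        writer.write(out)

Any file in FOLDER_1 without a counterpart in FOLDER_2 is simply reported and skipped.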
Related
I have very little experience with the command line and I'm trying to do something very complicated (to me).
I have a directory with A LOT of subfolders and files in them. All file names contain the parent folder name, e.g.:
Folder1
data_Folder1.csv
other_file_Folder1.csv
Folder2
data_Folder2.csv
other_file_Folder2.csv
In another folder (all in one directory), I have a new version of all the data_FolderX.csv files and I need to replace them in the original folders. I cannot give them another name because of later analyses. Is there a way to replace the files in the original folders with the new version, in the command line?
I tried this: "Replacing a file into multiple folders/subdirectories", but it didn't work for me. Given that I have many .csv files in the directories, I don't want to replace them all, so I don't think I should do it based on the file extension. I would also like to note that the name "FolderX" contains several other underscores, so in principle, I want to replace the .csv file starting with data_ in each FolderX.
Can anyone help?
Thanks in advance!
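If a small script run from the command line is acceptable, here is a minimal Python sketch of that matching logic. The folder names new_data and original_root are placeholders for your actual paths; it only touches files named data_*.csv, so the other .csv files are left alone:

# Minimal sketch, assuming the updated files sit flat in "new_data" and the
# FolderX directories live under "original_root" (both placeholder names).
from pathlib import Path
import shutil

new_dir = Path("new_data")
orig_root = Path("original_root")

for new_file in new_dir.glob("data_*.csv"):
    # find every file with exactly the same name anywhere under the original tree
    for old_file in orig_root.rglob(new_file.name):
        shutil.copy2(new_file, old_file)   # overwrite in place, name unchanged
        print(f"replaced {old_file}")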
I have a problem. I used "Everything" to list every txt file from a specific directory so that I can merge them, but in EmEditor I can't find a way to merge files from a list of locations.
Here is what the Everything file looks like:
E:\Main directory\subdirectory 1\file.txt
E:\Main directory\subdirectory 2\file.txt
E:\Main directory\subdirectory 3\file.txt
E:\Main directory\subdirectory 4\file.txt
The list goes over 40k locations. Is there a way to use a program to read all the locations in the text file and combine them?
Also, the subdirectories have other txt files that I don't want, so I can't just merge every txt file under the main directory. Another thing is that there are variations of "file.txt", like "Files.txt" for example.
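One option is a short script that reads the Everything export line by line and concatenates exactly those files, so the extra txt files in the subdirectories and the file.txt / Files.txt variations don't matter. A minimal Python sketch, assuming the export is saved as filelist.txt and the merged output should go to combined.txt (both placeholder names):

# Minimal sketch: "filelist.txt" is the Everything export (one path per line)
# and "combined.txt" is the merged output -- both placeholder names.
with open("combined.txt", "w", encoding="utf-8", errors="replace") as out:
    with open("filelist.txt", "r", encoding="utf-8", errors="replace") as listing:
        for line in listing:
            path = line.strip()
            if not path:
                continue                     # skip blank lines in the list
            with open(path, "r", encoding="utf-8", errors="replace") as src:
                out.write(src.read())
                out.write("\n")              # separator between files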
I would like some guidance and help with this:
I have a text file with names listed in it. I will call it “source file”
I have multiple text files scattered in folders and subfolders.
I would like to know how to make a script that would automatically add a SPECIFIC LINE to every text file (in the subfolders of a chosen target folder) that contains an exact name listed in the "source file".
More detail / example:
I have a names.txt that contains many names. I want to find all the text files in the target folder and its subfolders that contain names listed in names.txt, and in those files automatically add a "FALSE" line (before or after a specific existing line).
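A minimal Python sketch of that idea, assuming the source file is names.txt, the target folder is called target_folder (a placeholder), and the "FALSE" line should go right after any line containing one of the names:

from pathlib import Path

# Placeholder names: names.txt is the "source file", target_folder is the
# folder whose subfolders should be scanned.
names = [n.strip() for n in Path("names.txt").read_text(encoding="utf-8").splitlines() if n.strip()]

for txt in Path("target_folder").rglob("*.txt"):
    lines = txt.read_text(encoding="utf-8").splitlines()
    new_lines, changed = [], False
    for line in lines:
        new_lines.append(line)
        if any(name in line for name in names):
            new_lines.append("FALSE")        # insert right after the matching line
            changed = True
    if changed:
        txt.write_text("\n".join(new_lines) + "\n", encoding="utf-8")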
I 7-Zipped a multi-gig folder which contained many folders, each with many files, using the split-to-volumes (9 MB) option. 7-Zip created files of type .zip.001, .zip.002, etc. When I extract .001 it appears to work correctly, but I get an 'unexpected end of data' error and 7-Zip does not automatically go on to .002. When I extract .002, it also gives the same error and does not continue the original folder/file structure; instead it extracts a zip file in the same folder as the previously extracted files. How do I properly extract split files to obtain the original folder/file structure? Thank you.
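As far as I know, 7-Zip's split-to-volumes option produces a plain byte split of the archive, so the individual .002, .003, ... parts are not standalone archives; all the parts need to sit in the same folder and you extract only the .001 file, or stitch the parts back into one archive first. A minimal Python sketch of the stitching step, assuming the parts are named backup.zip.001, backup.zip.002, ... (a placeholder name) and sit in the current directory:

# Minimal sketch: stitch the split volumes back into one archive, then extract
# the result with 7-Zip or any zip tool. "backup.zip" is a placeholder name.
from pathlib import Path

parts = sorted(Path(".").glob("backup.zip.0*"))   # .001, .002, ... in order
with open("backup.zip", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())              # append each volume in sequence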
I have six folders like this >> Images, and each folder contains some images. I know how to read images in MATLAB, BUT my question is how I can traverse through these folders and read the images in my abc.m file (this file is shown in this image).
So basically you want to read images in different folders without putting all of the images into one folder and using imread()? Because you could just copy all of the images (and name them in a way that lets you know which folder they came from) into your MATLAB working directory and then load them that way.
Use the cd command to change directories (like in *nix) and then load/read the images as you traverse through each folder. You might need absolute path names.
The easiest way is certainly to right-click on the folder in MATLAB and choose "Add to Path" >> "Selected Folders and Subfolders".
Then you can just get images with imread without specifying the path.
If you know the path to the image-containing directory, you can use dir on it to list all the files (and directories) in it. Filter the files with the image extension you want and voilà, you have an array with all the images in the directory you specified:
dirname = 'images';
ext = '.jpg';
sDir = dir( fullfile(dirname, ['*' ext]) );   % wildcarded dir: only entries matching *.jpg
sDir([sDir.isdir]) = [];                      % remove directories
% the following filter is obsolete because of the wildcarded dir call above
b = arrayfun(@(x) strcmpi(x.name(end-length(ext)+1:end), ext), sDir); % filter on extension
sFiles = sDir(b);
You probably want to prefix the name of each file with the directory before using it:
sFileName{ii} = fullfile(dirname, sFiles(ii).name); % cell array, since fullfile returns a char vector
You can process the resulting files as you want. Loading all of them, for example:
for ii = 1:numel(sFiles)
    data{ii} = imread(fullfile(dirname, sFiles(ii).name)); % note the index ii and the full path
end
If you also want to recurse the subdirectories, I suggest you take a look at:
How to get all files under a specific directory in MATLAB?
or other solutions on the FEX:
http://www.mathworks.com/matlabcentral/fileexchange/8682-dirr-find-files-recursively-filtering-name-date-or-bytes
http://www.mathworks.com/matlabcentral/fileexchange/15505-recursive-dir
EDIT: added Amro's suggestion of wildcarding the dir call