Bash script - Merging Pdfs [closed] - bash

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
Hi, I have a few thousand single-page PDF files in one folder, plus one single-page PDF named 2.pdf. I want to merge every PDF in the folder with 2.pdf, so that each result has two pages, the second page being the contents of 2.pdf. Please assist with this. Thanks.

For every PDF file x.pdf in the current directory, except 2.pdf itself, we merge x.pdf with 2.pdf into a new file called new-x.pdf. We can use the pdfunite command described in merge / convert multiple pdf files into one pdf to do this:
cover="2.pdf"
outputDir="output-pdfs"
mkdir -p "$outputDir"
for f in *.pdf; do
    [[ "$f" == "$cover" ]] && continue  # skip the cover PDF
    pdfunite "$f" "$cover" "$outputDir/new-$f"
done

Note that "$f" already ends in .pdf, so the output name is new-$f rather than new-$f.pdf (which would produce names like new-x.pdf.pdf).

Related

How to extract a specific list of files from a folder in Windows? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I have a folder of around 3,000 music files of all the same type (.flac).
I made an excel (and .txt) list of around 1,000 files in that folder that I want to move to a different folder.
Is there a way to accomplish this without having to manually move each file by referencing the list?
Thank you!
First make a backup of everything!
Yes, this is indeed possible. I wrote a little Python script for you:
import os

with open("whichtomove.txt") as f:
    filelist = f.read().splitlines()

for name in filelist:
    if name:  # skip blank lines, e.g. a trailing newline at the end of the list
        os.rename(os.path.join("fromhere", name), os.path.join("tohere", name))
You just have to change the folder paths and I assumed the list is in this format:
file1.flac
tihs.flac
If your list uses a different separator, just change the split accordingly, e.g. f.read().split(';') if the entries are separated by ';'.
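One caveat with the script above: a single typo in the list makes os.rename raise FileNotFoundError partway through, leaving the move half done. A minimal, more defensive sketch (the function name move_listed and the directory layout are placeholders, not anything from the original answer) checks the whole list first and uses shutil.move:

```python
import os
import shutil

def move_listed(list_path, src_dir, dst_dir):
    """Move the files named in list_path from src_dir to dst_dir.

    If any listed file is missing from src_dir, nothing is moved and
    the missing names are returned so the list can be fixed first.
    """
    with open(list_path) as f:
        names = [line.strip() for line in f if line.strip()]
    missing = [n for n in names if not os.path.isfile(os.path.join(src_dir, n))]
    if missing:
        return missing  # nothing moved; fix the list first
    os.makedirs(dst_dir, exist_ok=True)
    for n in names:
        shutil.move(os.path.join(src_dir, n), os.path.join(dst_dir, n))
    return []
```

The all-or-nothing check means a partial move never happens, which matters when the list and the folder were produced separately.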

Reading all files inside a folder with bash [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
Closed 9 years ago.
The problem is to read all files inside a folder and apply certain functions specified in the bash script to each file. Somewhat like calling map() on an array in JavaScript, but in this case with bash.
You just need to loop over the files. Assuming your pwd is that folder:
for file in *.txt; do
    # do your stuff here with "$file" ...
done
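The same map-like pattern can be sketched in Python as well; here process stands in for whatever per-file function you need, and map_files is a name chosen for this sketch, not an existing API:

```python
import os

def map_files(folder, process, suffix=".txt"):
    """Apply process(path) to every file in folder ending with suffix,
    returning the results in a list, like map() over the files."""
    results = []
    for entry in sorted(os.scandir(folder), key=lambda e: e.name):
        if entry.is_file() and entry.name.endswith(suffix):
            results.append(process(entry.path))
    return results
```

Sorting the entries makes the order deterministic, which os.scandir alone does not guarantee.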

Create an applescript / automator workflow that excludes certain file names [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
I want to make an automator folder action that excludes file names with certain words in them, eg:
move all files to /folder x except those containing the words "Screen Shot"
The actual actions work fine I just need the script or automator actions to exclude the files.
You should be able to use “Filter Finder Items” and set the criteria to be “None of the following are true”: “Name contains Screen Shot”. I just did a quick test with “Choose a Folder”, “Get Folder Contents” and then “Filter Finder Items” and the Results contain only files that do not contain the word I specified.

Downloading Images with "illegal" characters [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I am migrating a shop over for a client.
I have to pull all the old image files off her 'shop' which has no FTP access.
It allowed me to export a list of filenames/URLs. My plan was to load them up in Firefox and use "DownloadThemAll" to simply download all the files (around 2000). However, about a third of them have [ and ] in the names.
i.e.
cdn.crapshop.com/images/image[1].jpg
Downloadthemall freaks out and only reads it as
cdn.crapshop.com/images/image
And won't download it because it isn't a file.
Anyone got any ideas of an alternative way to pull a list like this?
See this answer, which explains why the example URL you provided is invalid: Validation. As noted there in the answer by #good, characters that are not allowed by the specification have to be percent-encoded so the web server will understand them.
This calls for python... see this post: Percent encoding in python
And then we can put it all together in a script, which you will use to read from stdin and output to stdout: python script.py < input > output.out.
import sys
from urllib.parse import quote  # Python 3; in Python 2 this was urllib.quote

for line in sys.stdin:
    url = line.strip()
    if not url:
        continue
    # keep ':' and '/' unescaped so the scheme and path separators survive
    print(quote(url, safe=":/"))
Then, hopefully, DownloadThemAll will parse the corrected list (the input to the script is expected to be one URL per line).
You may be interested in this post as well: Downloading files with python. Which shows you how to download files (web pages in particular) using python.
Good luck!
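As a sanity check on the bracket case from the question, urllib.parse.quote maps [ and ] to %5B and %5D, while safe=":/" leaves the scheme colon and path slashes alone:

```python
from urllib.parse import quote

url = "cdn.crapshop.com/images/image[1].jpg"
encoded = quote(url, safe=":/")
print(encoded)  # cdn.crapshop.com/images/image%5B1%5D.jpg
```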

How to convert .nii to .nii.gz file? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I have lots of .nii files. How can I convert a .nii file to a .nii.gz file?
Thanks
As far as I know, there is nothing special about zipping NIfTI files. In MATLAB, you could simply do:
gzip('niftifilename.nii') % this will create niftifilename.nii.gz
gzip('*.nii') % this gzips each .nii file in the current directory separately
To work with the file again, you can unzip it, using gunzip. I've tried this on my Mac (don't know if this will work on Windows).
Typically, they are volume data, and hence take up a fair bit of disk space. Zipping it is purely for reducing the size of the file, and should not modify data.
You can simply do:
gzip({'*.nii'},outputdir)
which will compress each of your .nii files into its own .nii.gz and place the results in outputdir.
From the documentation:
To gzip all .m and .mat files in the current directory and store the
results in the directory archive, type:
gzip({'*.m','*.mat'},'archive')
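Outside MATLAB, the same conversion can be sketched with Python's standard library: gzip plus shutil.copyfileobj streams each .nii into a .nii.gz without loading the whole volume into memory (the wildcard path and gzip_file helper are just this sketch's choices):

```python
import glob
import gzip
import shutil

def gzip_file(path):
    """Compress path to path + '.gz', e.g. scan.nii -> scan.nii.gz."""
    with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)

for nii in glob.glob("*.nii"):  # every .nii file in the current directory
    gzip_file(nii)
```

As with the MATLAB version, this is plain gzip compression of the file bytes, so the NIfTI data itself is unchanged and tools that read .nii.gz can open the result directly.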
