I need an automation for PDFs that is a variant of printing multiple pages per sheet. A simple two-pages-per-sheet solution is easy and not what I need; I want room to take hand-written notes next to each page. So, here it goes:
Given a PDF, I'd like to print it with two pages per sheet, however, one page must be blank, like this:
+-------+-------+
| P.1   | white |
|       |       |
|       |       |
+-------+-------+
+-------+-------+
| P.2   | white |
|       |       |
|       |       |
+-------+-------+
etc.
Does anyone have an idea for a script that can automate this?
PS. I know how to do this in LaTeX, but I'd like to avoid the big gun...
If avoiding LaTeX does not mean avoiding every tool that depends on it, then PDFJam (in the Debian package texlive-extra-utils) could be of help; see this Q&A: Gluing (Imposition) PDF documents.
Otherwise you are probably better off with a little script that converts the .pdf pages to images and then merges each of them with a blank image; ImageMagick is able to do both of those things.
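For what it's worth, a rough sketch of that image-based route might look like the following (file names are placeholders; note that rasterising the PDF loses the selectable text layer and increases the file size):
src="source.pdf"
# render each page to a PNG (page-000.png, page-001.png, ...)
convert -density 150 "$src" page-%03d.png
# append a blank canvas of the same size to the right of every page
for p in page-*.png; do
    size=$(identify -format '%wx%h' "$p")
    convert "$p" \( -size "$size" xc:white \) +append "wide-$p"
done
# reassemble the widened pages into a single PDF and clean up
convert wide-page-*.png output.pdf
rm page-*.png wide-page-*.png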
With Ubuntu:
# install packages
sudo apt-get install enscript ghostscript pdfjam pdftk
source="source.pdf"
output="output.pdf"
# create ps with one blank page
echo -n | enscript -p blank.ps
# convert the blank ps to pdf
ps2pdf blank.ps blank.pdf
# get number of pages of $source
num=$(pdftk "$source" dump_data | grep -Po 'NumberOfPages: \K.*')
# create string with new page numbers
for ((i=1;i<=$num;i++)); do pages="$pages A$i-$i B1-1"; done
# create pdf with white pages
pdftk A="$source" B=blank.pdf cat $pages output tmp.pdf
# create pdf with two pages on one side
pdfjam tmp.pdf --nup 2x1 --landscape --outfile "$output"
# clean up
rm blank.ps blank.pdf tmp.pdf
I have a solution which does not print exactly the layout you want, but instead prints each page centered on the landscape sheet, like so:
+----+-------+----+
|    |  P.1  |    |
|    |       |    |
|    |       |    |
+----+-------+----+
+----+-------+----+
|    |  P.2  |    |
|    |       |    |
|    |       |    |
+----+-------+----+
If your goal is to create free space for hand annotations, this layout might be better since it lets you write the annotations closer to the printed text.
The following script relies on pdfjam, which uses LaTeX under the hood. Adding a few more command-line arguments to pdfjam would probably get exactly the layout you are looking for; see the sketch after the script.
#!/bin/bash
if [ "$#" -ne 1 ]; then
echo "usage: $0 PDF_filename..."
echo
echo "This script takes a PDF file as command line arguments,"
echo "and generates a new, landscape-formatted PDF file, where every "
echo "page has very large margins which may be useful for editorial notes"
echo
echo "Requires: pdfjam, which is installed by the apt-get package texlive-extra-utils"
exit 1
fi
command -v pdfjam >/dev/null 2>&1 || { echo >&2 "I require pdfjam but it's not installed. Do an apt install of texlive-extra-utils to get it on Ubuntu. Aborting."; exit 1; }
pdfjam --batch --nup 1x1 --suffix widemargin --landscape "$@"
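For example, something along these lines might push each page into the left half of the landscape sheet and leave the right half blank. This is untested: the --offset value is a rough guess for A4 paper (it is passed straight through to pdfpages' offset option), and input.pdf is just a placeholder name.
pdfjam --landscape --suffix notesleft --offset '-7cm 0cm' input.pdf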
I admit to being a novice at bash scripting, but I can't quite figure out how to accomplish a key step in a script, and I couldn't quite find what I was looking for in other threads.
I am trying to extract some specific data (numerical values) from multiple .xml files and add those to a space or tab delimited text file. The files will be generated over time so I need a way to append a new dataset to the pre-existing text file.
For instance, I would like to extract values for 3 different categories, 1 per row or column, and the value for each category from multiple xml files. Basically, I want to build a continuous graph of the data from each of 3 categories over time.
I have the following code which will successfully extract the 3 numbers from the xml file and trim the unnecessary text:
#!/bin/sh
grep "<observation name=\"meanGhost\" type=\"float\">" "/Users/Erik/MRI/PHANTOM/2/phantom_qa/summaryQA.xml" \
| sed 's/<observation name=\"meanGhost\" type=\"float\">//g' \
| sed 's/<\/observation>//g' >> $HOME/Desktop/testxml.txt
grep "<observation name=\"meanBrightGhost\" type=\"float\">" "/Users/Erik/MRI/PHANTOM/2/phantom_qa/summaryQA.xml" \
| sed 's/<observation name=\"meanBrightGhost\" type=\"float\">//g' \
| sed 's/<\/observation>//g' >> $HOME/Desktop/testxml.txt
grep "<observation name=\"std\" type=\"float\">" "/Users/Erik/MRI/PHANTOM/2/phantom_qa/summaryQA.xml" \
| sed 's/<observation name=\"std\" type=\"float\">//g' \
| sed 's/<\/observation>//g' >> $HOME/Desktop/testxml.txt
This gives the output:
1.12
0.33
134.1
I would like to then read in another xml file to get:
1.12 1.45
0.33 0.54
134.1 144.1
I would be grateful for any help with doing this! Thanks in advance.
Erik
It's much safer to use proper XML handling tools. For example, in xsh, you can write something like
$f1 := open /Users/Erik/MRI/PHANTOM/2/phantom_qa/summaryQA.xml ;
$f2 := open /path/to/the/second/file.xml ;
echo ($f1 | $f2)//observation[@name="meanGhost"] ;
echo ($f1 | $f2)//observation[@name="meanBrightGhost"] ;
echo ($f1 | $f2)//observation[@name="std"] ;
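If xsh is not available, a rough alternative sketch (assuming xmlstarlet is installed and the <observation> elements are reachable from the document root; the second path is a placeholder) is to pull the three values out of each file and then glue the per-file columns together with paste:
for f in /Users/Erik/MRI/PHANTOM/2/phantom_qa/summaryQA.xml /path/to/the/second/file.xml; do
    for tag in meanGhost meanBrightGhost std; do
        xmlstarlet sel -t -v "//observation[@name='$tag']" -n "$f"
    done > "$f.col"   # temporary one-column-per-file helper
done
# one column per XML file, tab-delimited
paste /Users/Erik/MRI/PHANTOM/2/phantom_qa/summaryQA.xml.col /path/to/the/second/file.xml.col > "$HOME/Desktop/testxml.txt"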
Does anyone know of any possible way to determine or glean this information from the terminal (in order to use in a bash shell script)?
On my Macbook Air, via the GUI I can go to "About this mac" > "Displays" and it tells me:
Built-in Display, 13-inch (1440 x 900)
I can get the screen resolution from the system_profiler command, but not the "13-inch" bit.
I've also tried with ioreg without success. Calculating the screen size from the resolution is not accurate, as this can be changed by the user.
Has anyone managed to achieve this?
I think you can only get the display model name, which holds a reference to the size:
ioreg -lw0 | grep "IODisplayEDID" | sed "/[^<]*</s///" | xxd -p -r | strings -6 | grep '^LSN\|^LP'
will output something like:
LP154WT1-SJE1
which depends on the display manufacturer. But as you can see, the first three digits in this model-name string imply the display size: 154 == 15.4''
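If you want to turn that into an actual size figure, a small follow-on sketch (assuming the LSN/LP naming convention holds for your panel) could be:
panel=$(ioreg -lw0 | grep "IODisplayEDID" | sed "/[^<]*</s///" | xxd -p -r | strings -6 | grep '^LSN\|^LP')
# strip the letter prefix, then read the first three digits as NN.N inches
size=$(echo "$panel" | sed 's/^L[A-Z]*//; s/^\([0-9][0-9]\)\([0-9]\).*/\1.\2/')
echo "${size}-inch"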
EDIT
Found a neat solution but it requires an internet connection:
curl -s http://support-sp.apple.com/sp/product?cc=`system_profiler SPHardwareDataType | awk '/Serial/ {print $4}' | cut -c 9-` |
sed 's|.*<configCode>\(.*\)</configCode>.*|\1|'
hope that helps
The next script:
model=$(system_profiler SPHardwareDataType | \
/usr/bin/perl -MLWP::Simple -MXML::Simple -lane '$c=substr($F[3],8)if/Serial/}{
print XMLin(get(q{http://support-sp.apple.com/sp/product?cc=}.$c))->{configCode}')
echo "$model"
will print for example:
MacBook Pro (13-inch, Mid 2010)
Or the same without Perl, but with more command forking:
model=$(curl -s http://support-sp.apple.com/sp/product?cc=$(system_profiler SPHardwareDataType | sed -n '/Serial/s/.*: \(........\)\(.*\)$/\2/p')|sed 's:.*<configCode>\(.*\)</configCode>.*:\1:')
echo "$model"
The model name is fetched online from Apple's site by serial number, so you need an internet connection.
I've found that there seem to be several different Apple URLs for checking this info. Some of them seem to work for some serial numbers, and others for other machines.
e.g:
https://selfsolve.apple.com/wcResults.do?sn=$Serial&Continue=Continue&num=0
https://selfsolve.apple.com/RegisterProduct.do?productRegister=Y&country=USA&id=$Serial
http://support-sp.apple.com/sp/product?cc=$serial (last 4 digits)
https://selfsolve.apple.com/agreementWarrantyDynamic.do
However, the first two URLs are the ones that seem to work for me. Maybe it's because the machines I'm looking up are in the UK and not the US, or maybe it's due to their age?
Anyway, due to not having much luck with curl on the command line (The Apple sites redirect, sometimes several times to alternative URLs, and the -L option doesn't seem to help), my solution was to bosh together a (rather messy) PHP script that uses PHP cURL to check the serials against both URLs, and then does some regex trickery to report the info I need.
Once on my web server, I can now curl it from the terminal command line and it's bringing back decent results 100% of the time.
I'm a PHP novice, so I won't embarrass myself by posting the script up in its current state, but if anyone's interested I'd be happy to tidy it up and share it on here (though admittedly it's a rather long-winded solution to what should be a very simple query).
This info really should be simply made available in system_profiler. As it's available through System Information.app, I can't see a reason why not.
For my bash script under GNU/Linux, I use the following to save the current resolution:
# Resolution Fix
WIDTH=$(xrandr --current | grep current | awk '{print $8}')
HEIGHT=$(xrandr --current | grep current | awk '{print $10}' | sed 's/,//g')
Resolution="${WIDTH}x${HEIGHT}"
# Resolution Fix
Later in the same script, after exiting the app or game that changed the resolution, I restore it by running xrandr with the saved value:
xrandr -s "$Resolution"
Another way you can try is the following, for example:
RESOLUTION=$(xdpyinfo | grep -i dimensions: | sed 's/[^0-9]*pixels.*(.*).*//' | sed 's/[^0-9x]*//')
VRES=$(echo $RESOLUTION | sed 's/.*x//')
HRES=$(echo $RESOLUTION | sed 's/x.*//')
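If you use this second approach, the restore step would presumably be the same idea, just reassembling the two values (an untested sketch; some_game is merely a placeholder for whatever changes the resolution):
some_game                     # placeholder: the app or game that changes the resolution
xrandr -s "${HRES}x${VRES}"   # restore the saved resolution afterwards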
I'm trying to write a bash script that determines whether a RAR archive has more than one root file.
The unrar command provides the following type of output if I run it with the v option:
[...#... dir]$ unrar v my_archive.rar
UNRAR 4.20 freeware Copyright (c) 1993-2012 Alexander Roshal
Archive my_archive.rar
Pathname/Comment
                  Size   Packed Ratio  Date   Time     Attr      CRC   Meth Ver
-------------------------------------------------------------------------------
file1.foo
        2208411  2037283  92% 08-08-08 08:08 .....A.  00000000 m3g 2.9
file2.bar
            103      103 100% 08-08-08 08:08 .....A.  00000000 m0g 2.9
baz/file3.qux
        9911403  9003011  90% 08-08-08 08:08 .....A.  00000000 m3g 2.9
-------------------------------------------------------------------------------
    3         12119917 11040397  91%
and since RAR is proprietary I'm guessing this output is as close as I'll get.
If I can get just the file list part (the lines between ------), and then perhaps filter out all even lines or lines beginning with multiple spaces, then I could do num_root_files=$(list of files | cut -d'/' -f1 | uniq | wc -l) and see whether [ $num_root_files -gt 1 ].
How do I do this? Or is there a saner approach?
I have searched for and found ways to grep text between two words, but then I'd have to include those "words" in the command, and doing that with entire lines of dashes is just too ugly. I haven't been able to find any solutions for "grep text between lines beginning with".
What I need this for is to decide whether to create a new directory or not before extracting RAR archives.
The unrar program does provide the x option to extract with full path and e for extracting everything to the current path, but I don't see how that could be useful in this case.
SOLUTION using the accepted answer:
num_root_files=$(unrar v "$file" | sed -n '/^----/,/^----/{/^----/!p}' | grep -v '^ ' | cut -d'/' -f1 | uniq | wc -l)
which seems to be the same as the shorter:
num_root_files=$(unrar v "$file" | sed -n '/^----/,/^----/{/^----/!p}' | grep -v '^ ' | grep -c '^ *[^/]*$')
OR using 7z as mentioned in a comment below:
num_root_files=$(7z l -slt "$file" | grep -c 'Path = [^/]*$')
# check if value is gt 2 rather than gt 1 - the archive itself is also listed
Oh no... I didn't have a man page for unrar, so I looked one up online, and it apparently lacked some options that I have just discovered with unrar --help. Here's the real solution:
unrar vb "$file" | grep -c '^[^/]*$'
I haven't been able to find any solutions for "grep text between lines
beginning with".
In order to get the lines between ----, you can say:
unrar v my_archive.rar | sed -n '/^----/,/^----/{/^----/!p}'
I recently accidentally formatted a 2TB hard drive as Mac OS Journaled!
I was able to recover the files with Data Rescue 3; the only problem is that the program didn't give me the files as they were, with their original directory tree and names.
For example I had
|-Music
||-Enya
|||-Sonadora.mp3
|||-Now we are free.mp3
|-Documents
||-CV.doc
||-LetterToSomeone.doc
...and so on
And now I got
|-MP3
||-M0001.mp3
||-M0002.mp3
|-DOCUMENTS
||-D0001.doc
||-D0002.doc
So with a huge amount of data it would take me centuries to manually open each file, see what it is and rename it.
Is there some batch process that can scan all my subfolders and recover the previous names, perhaps from metadata?
Or do you know a better tool that will keep the original names and paths of the files? (It doesn't matter if I must pay; there's always a solution for that :P)
Thank you
My contribution, for your music at least...
The idea is to go through all of the MP3 files found and distribute them based on their ID3 tags.
I'd do something like:
find /MP3 -type f -iname "*.mp3" | while IFS= read -r i
do
ARTIST=$(id3v2 -l "$i" | grep TPE1 | cut -d":" -f2 | sed -e 's/^[[:space:]]*//'); # This gets you the Artist
ALBUM=$(id3v2 -l "$i" | grep TALB | cut -d":" -f2 | sed -e 's/^[[:space:]]*//'); # This gets you the Album title
TRACK_NUM=$(id3v2 -l "$i" | grep TRCK | cut -d":" -f2 | sed -e 's/^[[:space:]]*//' | cut -d"/" -f1); # Track position, e.g. "2/13" trimmed to "2" so it is safe in a file name
TR_TITLE=$(id3v2 -l "$i" | grep TIT2 | cut -d":" -f2 | sed -e 's/^[[:space:]]*//'); # Track title
mkdir -p "/MUSIC/$ARTIST/$ALBUM/";
cp "$i" "/MUSIC/$ARTIST/$ALBUM/$TRACK_NUM.$TR_TITLE.mp3"
done
Basically:
* It looks for all ".mp3" files in /MP3,
* then analyses each file's ID3 tags and parses them into 4 variables, using the "id3v2" tool (you'll need to install it first). The tags are cleaned to keep only the value; sed is used to trim the leading spaces that might pollute the names.
* then creates (if needed) a tree in /MUSIC/ with the artist name and album name,
* then copies each input file into the new tree, renaming it based on the tags.
I want to apply a watch command on a mysql query every N seconds, but would like to have the results on the bottom left of the terminal instead of the top left:
watch -n 120 "mysql_query | column -t"
Shows my results like so:
--------------------------
|xxxxxxxxxxx             |
|xxxxxxxxxxx             |
|xxxxxxxxxxx             |
|                        |
|                        |
--------------------------
Whereas I would like them to have like so:
--------------------------
|                        |
|                        |
|xxxxxxxxxxx             |
|xxxxxxxxxxx             |
|xxxxxxxxxxx             |
--------------------------
Suggestion?
I don't see a straightforward way to do this, but I managed to force it to work using the following approach. I haven't fully tested it, so I cannot guarantee that it will work in all situations.
Using this script:
#!/bin/bash
TERM_HEIGHT=`tput lines` # determine terminal height
WATCH_BANNER_HEIGHT=2 # account for the lines taken up by the header of "watch"
let VIS_LINES="TERM_HEIGHT - WATCH_BANNER_HEIGHT" # height of visible area
(yes " " | head -n $VIS_LINES; cat | head -n $VIS_LINES) | tail -n $VIS_LINES
Post-process the output of your command as it is called by watch, e.g. (assuming the script was saved as align_bottom, made executable, and stored somewhere within your $PATH):
watch -n 120 "mysql_query | column -t | align_bottom"
What the script does:
Determine the height (number of lines) of the terminal
Calculate the visible area of the watch output
Print blank lines to pad the output (pushing the output down)
Read in output from stdin, and trim it so we only show the top of the output if it extends beyond the screen. If you want to see the bottom of the output instead, simply remove the head command after cat.
tail the output of steps (3) and (4) so excess padding is removed and the final output fits snugly within watch
I have to admit this seems a little hackish, but hopefully it gets you closer to what you're trying to achieve.
Update:
It should also be possible to implement this as a function instead, so it can sit comfortably in your .bashrc.
function align_bottom() {
(( VIS = $(tput lines) - 2 )) # height of visible area
(yes " " | head -n $VIS; cat | head -n $VIS) | tail -n $VIS
}
typeset -fx align_bottom # !! make it callable from subshell
Usage would be the same:
watch -n 120 "mysql_query | column -t | align_bottom"
Note that watch runs the given command using sh -c; therefore, as Dennis pointed out in the comments, on systems that do not link /bin/sh to /bin/bash the function approach shown above will not work.
It is possible to make it work using:
watch -n 120 "mysql_query | column -t | bash -c align_bottom"
but for portability and usability, it's cleaner to simply use the shell script approach.
I don't know if watch can do that, but what I'd do is use another tool to have multiple terminals and resize the one in which watch is running according to my needs.
A couple of these tools that can be useful are:
screen
byobu (screen with some enhancements)
terminator
I hope this helps.