I am lazy, I am cheap, and I also care about the planet (to an extent). What I wanted was a quick and dirty script to put two PDF pages on one page, as the printing options in college are extremely restrictive and do not let you perform magic with putting multiple pages on a sheet of paper. This saves money and paper, a lot of it, considering the amount of past papers I’m dragging myself through! To this you might say “But Wiiiiiiillllll, why can’t you just read a PDF from your screen and not use paper at all?”. Honestly, I have no good reason for preferring paper; maybe I’m just old school and stuck in the past. (Another reason for this “post” is that I also needed a very cheap excuse to update this neglected blog, but we will gloss over that for now.)
Much like in my post on (painfully) extracting line drawings from Simulink (Simulink Line Drawings), I am going to attempt to meddle with PDFs again.
In the interest of optimal laziness I use the command pdfnup (part of the PDFjam package) to perform all the wizardry I need, and then I just run a for loop over all my documents. Yes, I cheated by using a package to do it for me. It’s that simple and I’m not even sure this is worthy of a blog post.
pdfnup is used as follows: pdfnup -o "twopaged.pdf" "normal.pdf", where normal.pdf is the file we’re condensing down to two pages per page and twopaged.pdf is the output file name.
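As far as I know, pdfnup defaults to placing two pages side by side on a landscape sheet, which is exactly what I want here. If you fancied cramming even more on, pdfjam’s --nup option controls the grid; a quick sketch, mirroring the -o flag used above (fourpaged.pdf is a made-up name):

pdfnup --nup 2x2 -o "fourpaged.pdf" "normal.pdf"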
This is a strong part of my “optimal laziness” philosophy: why should I have to run pdfnup twenty times and lose track of what I did in the process? It’s far easier to automate myself out of the job… again.
Say we have a folder called ./original/ whose contents are PDF files we wish to shrink to two pages per page. We can list all the files in the folder with the command find ./original/, and this can be assigned to a variable with files=$(find ./original/).
Due to the beautiful foresight of Kernighan, Ritchie and Thompson, the output is arranged with one filename per line.
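To make that concrete, a hypothetical folder with two past papers would list like this (note that find also prints the directory entry itself, which is one reason the file type check further down matters):

$ find ./original/
./original/
./original/paper1.pdf
./original/paper2.pdf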
Bash can easily iterate over this variable in a for loop; it understands the wonderful way our data is structured, and the for loop is a simple:
for file in $files
do
    stuff_here
done
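(A quick aside: word splitting means a loop like this trips over filenames containing spaces. A sketch of a more robust alternative, not what my script below uses, pipes find straight into a while loop:

while IFS= read -r file; do
    stuff_here
done < <(find ./original/ -name "*.pdf")

But past papers tend to have sensible names, so optimal laziness wins.)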
To confirm we are actually working with a PDF, and not try mangling whatever random garbage may exist in our directory, we can check our file is a PDF with an if statement:
filetype=".pdf"
if [[ $file = *${filetype} ]]; then
    stuff_here
fi
Yes, there are far superior ways to check whether a file is a PDF, but spending ages fiddling with a quick and dirty script goes against the theory of optimal laziness.
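(If you do want a proper check, one sketch, assuming the file utility is installed, inspects the actual MIME type rather than the extension:

if [[ $(file -b --mime-type "$file") == "application/pdf" ]]; then
    stuff_here
fi

)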
What bothers me a lot is when my files get into a huge mess, so it’s only good manners to output the processed files somewhere neat and easy to handle.
For a given file in the for loop, I ran pdfnup as follows:
pdfnup -o ./condensed/$(basename $file) $file
This places the condensed copy in ./condensed/ with the same filename as the original PDF.
Given a file path, the filename on its own can be obtained with the command basename. Here I wrap it in a $() command substitution, so its output lands directly inside the output argument for pdfnup, a construct I’m sure you’ve seen littered enough throughout my bash scripts.
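To make that concrete (with a made-up filename):

file="./original/exam-2019.pdf"
basename "$file"                        # prints exam-2019.pdf
echo "./condensed/$(basename "$file")"  # prints ./condensed/exam-2019.pdf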
That’s all there really is to this tiny little script, available right below this post.
I added a rather nice feature: if you run the maketwopagers script in a given folder, it creates an ./original/ folder and a ./condensed/ folder if they don’t exist! Running the script again with files in the ./original/ directory will condense them appropriately.
I also added a nifty “zip all the files” at the end, as it’s always nice to have an archive in case you need to share the files you freshly made.
Most current version available in my dotfiles.
#!/usr/bin/env bash
filetype=".pdf"

# create the working directories on first run
if [ ! -d "./original/" ]; then
    echo "Directory 'original' not found! Creating..."
    mkdir -p ./original/
fi
if [ ! -d "./condensed/" ]; then
    echo "Directory 'condensed' not found! Creating..."
    mkdir -p ./condensed/
fi

# one filename per line, courtesy of find
files=$(find ./original/)

for file in $files
do
    # skip anything that isn't a PDF (including the directory entry itself)
    if [[ $file = *${filetype} ]]; then
        echo "Condensing $file"
        # put two pages per page from the PDF
        pdfnup -o "./condensed/$(basename "$file")" "$file"
    fi
done

# archive both sets in case they need sharing
zip condensed_papers.zip ./condensed/*
zip original_papers.zip ./original/*
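Using it looks something like this (a sketch; the folder name is made up):

cd ~/past-papers    # wherever your PDFs live
maketwopagers       # first run creates original/ and condensed/
mv *.pdf original/
maketwopagers       # second run condenses everything and zips it up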