BASH multiple nested IFs inside DO

The script I am trying to write needs to do the following:
Go to a specific directory, find all files with the *.mp4 extension that are newer than 30 minutes, and store them in a variable.
All files in the variable need to be uploaded to an FTP server, with an email notification going out afterwards. The next if needs to look through the files, find all files ending in *1280x720_3500_h32.mp4, and upload those to a different FTP server.
Lastly, the script needs to move all files to a separate location on the network for archiving.
The process then needs to run every 30 minutes, which is why I have added a separate script that runs the 30-minute sleep and calls the upload script. So in effect I have an upload script and a sleep script. Two questions spring to mind:
I am not sure about the 2nd if, where the script looks for all the *1280x720_3500_h32.mp4 files: will it upload all files with this name that are newer than 30 minutes?
Race conditions: will the script try to upload files that have already been processed, or skip files?
Sleep script:
#!/bin/bash
ScripLocal='/Users/jonathan/scripts/Event_Scripts'
while true
do
echo "running Upload script: find_newerthan_mv_toFtp_v2.sh"
cd "$ScripLocal"
. find_newerthan_mv_toFtp_v2.sh
echo "finished running script: find_newerthan_mv_toFtp_v2.sh - next run is in 30 minutes"
sleep 30m
done
Upload Script:
#!/bin/bash
#File Origin
GSPORIGIN='/Volumes/HCHUB/Episode/Watchfolders/Output/Profile_01/'
#To location on the SAN where the files are being stored
DESTDIRSAN='/Volumes/HCHUB/Digitalmedia/online_encodes'
# 1st FTP Details
HOST='ftp.upload01.com'
USER='xxxx'
PASSWD='xxxx'
DESTDIR="/8619/_!/intlod/upload/"
#2nd FTP details
HOST01='ftp.upload02.com'
USER01='xxxx'
PASSWD01='xxxx'
cd $GSPORIGIN
for file in `find . -type f -name "*.mp4" -mmin -30`
do
echo $file
if [ -f $file ] ; then
echo "Uploading to FTP 01"
ftp -n -v $HOST << EOT
ascii
user $USER $PASSWD
prompt
cd $DESTDIR
mput $file
EOT
echo "$file has been copied to FTP 01" | mail -s "$file has been copied to FTP 01 in Directory $DESTDIR" xxxx#xxxx.com xxxx#xxxx.com xxxx#xxxx.com;
if [[ $file == *1280x720_3500_h32.mp4 ]] ; then
echo "Uploading FTP 02 version"
ftp -n -v $HOST01 << EOT
ascii
user $USER01 $PASSWD01
prompt
mput $file
EOT
echo "$file has been copied to FTP 02" | mail -s "$file has been copied to FTP 02" xxxx#xxxx.com;
fi
echo "moving files to SAN $DESTDIRSAN"
mv -v $file $DESTDIRSAN
else exit 1
fi
done

If you're worried about race conditions, don't use FTP. FTP is a very, very bad protocol for data exchange: it doesn't give you proper error messages when something goes wrong, the exit code is always 0 (even when something didn't work), there is no protection against duplicate or missed uploads, etc.
Use rsync instead. rsync starts writing the target file under a temporary name (so the receiver won't see it until the upload is complete), it can resume an upload (important for big files), it uses checksums to make sure the upload is correct, and it fails with a non-zero exit code if anything is wrong.
On top of that, it can use ssh to secure the connection, so you don't have to send plain-text passwords over the network (which is a very bad idea today, even on an intranet).
rsync has powerful filtering options (what to upload) and it can archive any processed file so you won't upload something twice. It can even detect when a user uploads a file a second time and fail (or silently skip if the file is 100% identical).
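For example, a minimal sketch, assuming the receiving host accepts ssh (user, host and target path here are placeholders, not your real setup):
# -a preserves attributes, -v is verbose; --remove-source-files deletes each
# file locally after a verified transfer, so the next run can't upload it again
rsync -av --remove-source-files \
    /Volumes/HCHUB/Episode/Watchfolders/Output/Profile_01/*.mp4 \
    user@upload01.example.com:/upload/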
To make sure that you don't run two uploads at the same time, use a lock:
mkdir .lock || { echo "Can't lock - process is already running" ; exit 1 ; }
trap "rmdir .lock" EXIT
You might also want to have a look at FTAPI which does all this and more.

Use crontab, not sleeping threads. sleep is prone to failure, and nohupping is a hassle anyway.
Use rsync instead of looking for changes yourself, if you can at all. Also, you don't need to script rsync the way you have to script ftp.
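For example, a crontab entry like this (edit with crontab -e; the script path is taken from your sleep script) runs the upload every 30 minutes with no sleep loop at all:
*/30 * * * * /Users/jonathan/scripts/Event_Scripts/find_newerthan_mv_toFtp_v2.sh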

Got the python variation of the script working (thank you Daniel!!).
#!/usr/bin/python
# Created in a rush, on the 31st October
# HALLOWEEN SPECIAL - QUICK AND DIRTY
import os
import shutil
from ftplib import FTP
# File Origin
ENCODE_OUTPUT='xxxxxxx'
# To location on the SAN where the files are being cached on the network
CACHE_DIR='xxxxxxxx'
#ENCODE_OUTPUT='encode/'
#CACHE_DIR='cache/'
#FTP01 FTP Details
FTP01_HOST='upload.FTP01.com'
FTP01_USER='xxxxx'
FTP01_PASSWD='xxxxxxx'
FTP01_DESTDIR="dir/dir/"
#Fingerprinting FTP details
FINGERPRINT_HOST='ftp.ftp02.com'
FINGERPRINT_USER='xxxxx'
FINGERPRINT_PASSWD='xxxxxx'
FINGERPRINT_DEST_DIR='/'
dirList = os.listdir(ENCODE_OUTPUT)
dirList.sort()
# Move the files out of the encode location and into the cache dir
for video_file in dirList:
    print "Moving file %s to %s" % (video_file, CACHE_DIR)
    shutil.move(ENCODE_OUTPUT + video_file, CACHE_DIR)
    # Upload the file to FTP01
    print "Logging into %s FTP" % FTP01_HOST
    ftp = FTP(FTP01_HOST)
    ftp.login(FTP01_USER, FTP01_PASSWD)
    ftp.cwd(FTP01_DESTDIR)
    print "Uploading file %s to %s" % (video_file, FTP01_HOST+FTP01_DESTDIR)
    ftp.storbinary('STOR ' + FTP01_DESTDIR+video_file, open(CACHE_DIR+video_file, "rb"), 1024)
    # Close the connection
    ftp.close()
    # If the file ends in "1280x720_3500_h32.mp4", copy it to the fingerprinting service
    if video_file[-21:] == "1280x720_3500_h32.mp4":
        # Upload the file to the fingerprinting service
        print "File is HD. Logging into %s FTP" % FINGERPRINT_HOST
        ftp = FTP(FINGERPRINT_HOST)
        ftp.login(FINGERPRINT_USER, FINGERPRINT_PASSWD)
        ftp.cwd(FINGERPRINT_DEST_DIR)
        print "Uploading file %s to %s" % (video_file, FINGERPRINT_HOST)
        ftp.storbinary('STOR ' + FINGERPRINT_DEST_DIR+video_file, open(CACHE_DIR+video_file, "rb"), 1024)
        ftp.close()

Related

Extracting certain files from a tar archive on a remote ssh server

I am running numerous simulations on a remote server (via ssh). The outcomes of these simulations are stored as .tar archives in an archive directory on this remote server.
What I would like to do, is write a bash script which connects to the remote server via ssh and extracts the required output files from each .tar archive into separate folders on my local hard drive.
These folders should have the same name as the .tar file from which the files come (To give an example, say the output of simulation 1 is stored in the archive S1.tar on the remote server, I want all '.dat' and '.def' files within this .tar archive to be extracted to a directory S1 on my local drive).
For the extraction itself, I was trying:
for f in *.tar; do
(
mkdir ../"${f%.tar}"
tar -x -f "$f" -C ../"${f%.tar}" "*.dat" "*.def"
)
done
wait
Every .tar file is around 1GB and there is a lot of them. So downloading everything takes too much time, which is why I only want to extract the necessary files (see the extensions in the code above).
Now the code works perfectly when I have the .tar files on my local drive. However, what I can't figure out is how I can do it without first having to download all the .tar archives from the server.
When I first connect to the remote server via ssh username@host, the script stops at that point and the terminal just connects to the server.
Btw I am doing this in VS Code and running the script through terminal on my MacBook.
I hope I have described it clear enough. Thanks for the help!
Stream the results of tar back with filenames via SSH
To get the data you wish to retrieve from .tar files, you'll need to pass the results of tar to a string of commands with the --to-command option. In the example below, we'll run three commands.
# Send the file's name back to your shell
echo $TAR_FILENAME
# Send the contents of the file back
cat /dev/stdin
# Send EOF (Ctrl+d) back (note: since we're already in a $'' we don't use the $ again)
echo '\004'
Once the information is captured in your shell, we can start to process the data. This is a three-step process.
Get the file's name
note that, in this code, we aren't handling directories at all (simply stripping them away; i.e. dir/1.dat -> 1.dat)
you can write code to create directories for the file by replacing the forward slashes / with spaces and iterating over each directory name but that seems out-of-scope for this.
Check for the EOF (end-of-file)
Add content to file
# Get the files via ssh and tar
files=$(ssh -n <user@server> $'tar -xf <tar-file> --wildcards \'*\' --to-command=$\'echo $TAR_FILENAME; cat /dev/stdin; echo \'\004\'\'')
# Keeps track of what state we're in (filename or content)
state="filename"
filename=""
# Each line is one of these:
# - file's name
# - file's data
# - EOF
while read line; do
if [[ $state == "filename" ]]; then
filename=${line/*\//}
touch $filename
echo "Copying: $filename"
state="content"
elif [[ $state == "content" ]]; then
# look for EOF (ctrl+d)
if [[ $line == $'\004' ]]; then
filename=""
state="filename"
else
# append data to file
echo $line >> <output-folder>/$filename
fi
fi
# Double quotes here are very important
done < <(echo -e "$files")
Alternative: tar + scp
If the above example seems overly complex for what it's doing, it is. An alternative that touches the disk more and requires two separate ssh connections would be to extract the files you need from your .tar file to a folder and scp that folder back to your workstation.
ssh -n <username>@<server> 'mkdir output/; tar -C output/ -xf <tar-file> --wildcards "*.dat" "*.def"'
scp -r <username>@<server>:output/ ./
The breakdown
First, we'll make a place to keep our outputted files. You can skip this if you already know the folder they'll be in.
mkdir output/
Then, we'll extract the matching files to this folder we created (if you don't want them to be in a different folder remove the -C output/ option).
tar -C output/ -xf <tar-file> --wildcards "*.dat" "*.def"
Lastly, now that we're running commands on our machine again, we can run scp to reconnect to the remote machine and pull the files back.
scp -r <username>@<server>:output/ ./

Reading Through A List of Files, then Sending those Files via FTP

I am making weather model charts with the Grads scripting language, and I am using a bash script so I can use a while loop to download model data (in grib2 format) and call the grads scripts for each frame on the model run. Right now, I have a loop that runs through all the scripts for a given forecast hour and uploads the image output via FTP. After this for loop completes, the grib2 data for the next hour is downloaded, and the loop runs again.
for ((i=0;i<${#SCRIPTS[@]};i++)); do
#define filename
FILENAME="${FILENAMES[i]}${FORECASTHOUR}hrfcst.png"
#run grads script
/home/mint/opengrads/Contents/opengrads -lbc "run /home/mint/opengrads/Contents/plotscripts/${SCRIPTS[i]} $CTLFILE $INIT_STRINGDATE $INIT_INTDATE $INITHOUR $FILENAME $h $MODEL $MODELFORTITLE 500"
#run ftp script
#sh /home/mint/opengrads/Contents/bashscripts/ftpsample.sh $INIT_INTDATE $INITHOUR $FILENAME $MODEL
done
This is inelegant because I open and close an FTP session each time I send a single image. I would much rather write the filenames for a given forecast hour to a .txt file (ex: have an "echo ${FILENAME} >> FILEOFFILENAMES.txt" in the loop) and have my FTP script read and send all those files in a single session. Is this possible?
It's possible. You can add this to your shell script to generate the ftp script and then have it run after you've generated the files:
echo open $HOST > ftp.txt
echo user $USER $PASS >> ftp.txt
find . -type f -name '*hrfcst.png' -printf "put destination/%f %f\n" >> ftp.txt
echo bye >> ftp.txt
ftp < ftp.txt
The above code will generate file ftp.txt with commands and pass that to ftp. The generated ftp.txt will look like:
open host
user user pass
put destination/forecast1.hrfcst.png forecast1.hrfcst.png
put destination/forecast2.hrfcst.png forecast2.hrfcst.png
put destination/forecast3.hrfcst.png forecast3.hrfcst.png
...
bye
The following script will upload all files added within the last day (find -mtime -1) from a local directory to a remote FTP directory.
#!/bin/bash
HOST='hostname'
USER='username'
PASSWD='password'
# Local directory where the files are stored.
cd "/local/directory/from where to upload files/"
# To get all the files added today only.
TODAYSFILES=`find -maxdepth 1 -type f -mtime -1`
# remote server directory to upload backup
REMOTEDIR="/directory on remote ftp computer/"
for FILENAME in ${TODAYSFILES[@]}; do
ftp -n -v $HOST << EOT
ascii
user $USER $PASSWD
prompt
cd $REMOTEDIR
put $FILENAME
bye
EOT
done

Monitor and ftp newly-added files on Linux -- modify existing code

The OS is centos 7, I have a small application to implement below functionality:
1. Read information from config.ini, like this:
# Configuration file for ftpxml service
# Remote FTP server informations
ftpadress=1.2.3.4
username=test
password=test
# Local folders configuration
# folderA: folder for incomming files
folderA=/u02/dump
# folderB: Successfuly transfered files are copied here
folderB=/u02/dump_bak
# retrydir: when ftp upload fails, store failed files in this
# directory
retrydir=/u02/dump_retry
2. Monitor folder A. If there are any newly-added files in A, do step 3.
3. FTP these new files to a remote FTP server in the order of their creation time. When an upload finishes, copy the uploaded file to folder B and delete it from folder A.
4. If the FTP fails, store the relevant files in retrydir and try to FTP them later.
5. Record every operation in a log file.
Detailed setup instructions for the application:
1. Install the ncftp package: yum install ncftp -y. It's not a service or a daemon, just a client tool that the bash script invokes for FTP.
2. Customize these files to suit your setup using vi: config.ini, ftpmon.path and ftpmon.service.
3. Copy ftpmon.path and ftpmon.service to /etc/systemd/system/, copy config.ini and ftpxml.sh to /u02/ftpxml/, then run: chmod +x ftpxml.sh
4. Start the monitoring tool: sudo systemctl start ftpmon.path
If you want to enable it at boot time, just enter: sudo systemctl enable ftpmon.path
5. Set up a cron task to purge queued files (add option -p):
*/5 * * * * /u02/ftpxml/ftpxml.sh -p
Now the application seems to work well, except in one special situation:
When we put several files into folder A continuously, for instance 1.txt, 2.txt and 3.txt one after another in a short time, we usually find that 1.txt is FTPed fine, but the following files fail to FTP and stay under folder A.
Now I am going to fix this problem. I suppose the error may be due to this: while the first file is being FTPed, the second file is already created under folder A, so the code never notices the second file.
Below is code of ftpxml.sh:
#!/bin/bash
# ! Please read the README.txt file !
# Copy files from upload dir to remote FTP then save them
# to folderB
# look our location
SCRIPT=$(readlink -f $0)
# Absolute path to this script
SCRIPTPATH=`dirname $SCRIPT`
PIDFILE=${SCRIPTPATH}/ftpmon_prog.lock
# load config.ini
if [ -f $SCRIPTPATH/config.ini ]; then
source $SCRIPTPATH/config.ini
else
echo "No config found. Exiting"
fi
# Lock to avoid multiple instances
if [ -f $PIDFILE ]; then
kill -0 $(cat $PIDFILE) 2> /dev/null
if [ $? == 0 ]; then
exit
fi
fi
# Store PID in lock file
echo $$ > $PIDFILE
# Parse cmdline arguments
while getopts ":ph" opt; do
case $opt in
p)
#we set the purge mode (cron mode)
purge_only=1
;;
\?|h)
echo "Help text"
exit 1
;;
esac
done
# declare useful functions
# common logging function
function logmsg() {
LOGFILE=ftp_upload_`date +%Y%m%d`.log
echo $(date +%m-%d-%Y\ %H:%M:%S) $* >> $SCRIPTPATH/log/${LOGFILE}
}
# Upload to remote FTP
# we use ncftpput to batch silently
# $1 file to upload $2 return value placeholder
function upload() {
ncftpput -V -u $username -p $password $ftpadress /prog/ $1
return $?
}
function purge_retry() {
failed_files=$(ls -1 -rt ${retrydir}/*)
if [ -z "$failed_files" ]; then
return
fi
while read line
do
#upload ${retrydir}/$line
upload $line
if [ $? != 0 ]; then
# upload failed we exit
exit
fi
logmsg File $line Uploaded from ${retrydir}
mv $line $folderB
logmsg File $line Copied from ${retrydir}
done <<< "$failed_files"
}
# first check out 'queue' directory
purge_retry
# if called from a cron task we are done
if [ $purge_only ]; then
rm -f $PIDFILE
exit 0
fi
# look in incoming dir
new_files=$(ls -1 -rt ${folderA}/*)
while read line
do
# launch upload
if [ -z "$line" ]; then
break
fi
#upload ${folderA}/$line
upload $line
if [ $? == 0 ]; then
logmsg File $line Uploaded from ${folderA}
else
# upload failed we cp to retry folder
echo upload failed
cp $line $retrydir
fi
# whether the upload succeeded or failed, we ALWAYS move the file to folderB
mv $line $folderB
logmsg File $line Copied from ${folderA}
done <<< "$new_files"
# clean exit
rm -f $PIDFILE
exit 0
below is content of ftpmon.path:
[Unit]
Description= Triggers the service that logs changes.
Documentation= man:systemd.path
[Path]
# Enter the path to monitor (/u02/dump)
PathModified=/u02/dump/
[Install]
WantedBy=multi-user.target
below is content of ftpmon.service:
[Unit]
Description= Starts File Upload monitoring
Documentation= man:systemd.service
[Service]
Type=oneshot
#Set here the user that ftpmxml.sh will run as
User=root
#Set the exact path to the script
ExecStart=/u02/ftpxml/ftpxml.sh
[Install]
WantedBy=default.target
Thanks in advance, hope any experts can give me some suggestion.
Since you remove successfully transferred files from A, files with transfer errors can be left in A. So I am dealing only with files in one folder.
List your files by creation time with
find -type f -maxdepth 1 -print0 | xargs -r0 stat -c %y\ %n | sort
if you want hidden files to be included, or, if not:
find -type f -maxdepth 1 ! -name '.*' -print0 | xargs -r0 stat -c %y\ %n | sort
You'll get something like
2016-02-19 18:53:41.000000000 ./.dockerenv
2016-02-19 18:53:41.000000000 ./.dockerinit
2016-02-19 18:56:09.000000000 ./versions.txt
2016-02-19 19:01:44.000000000 ./test.sh
Now cut off the timestamps to get the filenames (or use xargs -r0 stat -c %n if it does not matter that the files are ordered by name instead of by timestamp) and then, for each file (see the sketch below):
do the transfer
check the success
move successfully transferred files to B
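A minimal sketch tying those steps together, reusing the upload() function and the config.ini variables from the question (error handling kept as simple as in the original):
find "$folderA" -maxdepth 1 -type f -print0 |
  xargs -r0 stat -c '%Y %n' |   # epoch mtime + filename
  sort -n |                     # oldest first
  cut -d' ' -f2- |              # drop the timestamp again
while read -r f; do
    if upload "$f"; then
        mv "$f" "$folderB"      # transferred: archive to folderB
    else
        cp "$f" "$retrydir"     # failed: queue for a later retry
    fi
done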
As you stated above, there are situations where newly stored files are not successfully transferred. This may happen if the file is still being written to after you started the transfer. So filter on the timestamp so the file is at least some time old. Add -mmin +1 to the find statement for "at least one minute old":
find -type f -maxdepth 1 -mmin +1 -print0 | xargs -r0 stat -c %n | sort
If you don't want to use a one-minute file age, you'll have to check whether the file is still open, e.g. with lsof | grep ./testfile, but this may have issues if you have tmpfs in your file system:
lsof | grep ./testfile
lsof: WARNING: can't stat() tmpfs file system /var/lib/docker/containers/8596cd310292a54652c7f50d7315c8390703b4816442146b340946779a72a40c/shm
Output information may be incomplete.
lsof: WARNING: can't stat() proc file system /run/docker/netns/fb9323486c44
Output information may be incomplete.
Alternatively, add %s to the stat format to check the file size twice within a few seconds; if it is constant, the file is probably completely written. Probably, because the write process may just be stalled.
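For example, a minimal sketch (assuming GNU stat; $f is the candidate file):
size1=$(stat -c %s "$f")
sleep 5
size2=$(stat -c %s "$f")
if [ "$size1" -eq "$size2" ]; then
    echo "$f looks complete"    # size stable: safe to hand to upload()
fi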

Bash: Check if remote directory exists using FTP

I'm writing a bash script to send files from a linux server to a remote Windows FTP server.
I would like to check using FTP if the folder where the file will be stored exists before attempting to create it.
Please note that I cannot use SSH nor SCP and I cannot install new scripts on the linux server. Also, for performance issues, I would prefer if checking and creating the folders is done using only one FTP connection.
Here's the function to send the file:
sendFile() {
ftp -n $FTP_HOST <<! >> ${LOCAL_LOG}
quote USER ${FTP_USER}
quote PASS ${FTP_PASS}
binary
$(ftp_mkdir_loop "$FTP_PATH")
put ${FILE_PATH} ${FTP_PATH}/${FILENAME}
bye
!
}
And here's what ftp_mkdir_loop looks like:
ftp_mkdir_loop() {
local r
local a
r="$#"
while [[ "$r" != "$a" ]]; do
a=${r%%/*}
echo "mkdir $a"
echo "cd $a"
r=${r#*/}
done
}
The ftp_mkdir_loop function helps in creating all the folders in $FTP_PATH (Since I cannot do mkdir -p $FTP_PATH through FTP).
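For example, with a hypothetical path, ftp_mkdir_loop "dir1/dir2/dir3" emits this command sequence (running mkdir on a directory that already exists is what produces the errors below):
mkdir dir1
cd dir1
mkdir dir2
cd dir2
mkdir dir3
cd dir3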
Overall my script works but is not "clean"; this is what I'm getting in my log file after the execution of the script (yes, $FTP_PATH is composed of 5 existing directories):
(directory-name) Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
To solve this, do as follows:
To ensure that you only use one FTP connection, you generate the input (the FTP commands) as the output of a shell script.
E.g.
$ cat a.sh
cd /home/test1
mkdir /home/test1/test2
$ ./a.sh | ftp $Your_login_and_server > /your/log 2>&1
To let FTP test whether a directory exists, use the fact that the "dir" command can write its output to a local file
# ...continuing a.sh
# In a loop, $CURRENT_DIR is the next subdirectory to check-or-create
echo "DIR $CURRENT_DIR $local_output_file"
sleep 5 # to leave time for the file to be created
if [ ! -s "$local_output_file" ]
then
echo "mkdir $CURRENT_DIR"
fi
Please note that "-s" test is not necessarily correct - I don't have acccess to ftp now and don't know what the exact output of running DIR on non-existing directory will be - cold be empty file, could be a specific error. If error, you can grep the error text in $local_output_file
Now wrap step #2 in a loop over the individual subdirectories of your path in a.sh.
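A hypothetical sketch of that loop, reusing the component-splitting idea from your ftp_mkdir_loop (the sleep gives the concurrently running ftp time to write each listing file):
r="$FTP_PATH"
while [ -n "$r" ]; do
    CURRENT_DIR=${r%%/*}
    rm -f "$local_output_file"              # start each check with a fresh listing
    echo "dir $CURRENT_DIR $local_output_file"
    sleep 5
    if [ ! -s "$local_output_file" ]; then  # empty listing: directory is missing
        echo "mkdir $CURRENT_DIR"
    fi
    echo "cd $CURRENT_DIR"
    case $r in */*) r=${r#*/} ;; *) r='' ;; esac
done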
#!/bin/bash
FTP_HOST=prep.ai.mit.edu
FTP_USER=anonymous
FTP_PASS=foobar@example.com
DIRECTORY=/foo # /foo does not exist, /pub exists
LOCAL_LOG=/tmp/foo.log
ERROR="Failed to change directory"
ftp -n $FTP_HOST << EOF | tee -a ${LOCAL_LOG} | grep -q "${ERROR}"
quote USER ${FTP_USER}
quote pass ${FTP_PASS}
cd ${DIRECTORY}
EOF
if [[ "${PIPESTATUS[2]}" -eq 1 ]]; then
echo ${DIRECTORY} exists
else
echo ${DIRECTORY} does not exist
fi
Output:
/foo does not exist
If you want to suppress only the messages in ${LOCAL_LOG}:
ftp -n $FTP_HOST <<! | grep -v "Cannot create a file" >> ${LOCAL_LOG}

Printing shell script doesn't print anything

I'm trying to create a print service on my Raspberry Pi. The idea is to have a pop3 account for print jobs where I can send PDF files and get them printed at home. Therefore I set up fetchmail → procmail → uudeview to collect the emails (using a whitelist), extract the documents and save them to /home/pi/attachments/. Up to this point everything is working.
To get the files printed I wanted to set up a shell script which I planned to execute via a cronjob every minute. That's where I'm stuck now, since I get "permission denied" messages and nothing gets printed at all with the script, although it works when I execute the commands manually.
This is what my script looks like:
#!/bin/bash
fetchmail # gets the emails, extracts the PDFs to ~/attachments
wait $! # takes some time so I have to wait for it to finish
FILES=/home/pi/attachments/*
for f in $FILES; do # go through all files in the directory
if $f == "*.pdf" # print them if they're PDFs
then
lpr -P ColorLaserJet1525 $f
fi
sudo rm $f # delete the files
done;
sudo rm /var/mail/pi # delete emails
After the script is executed I get the following Feedback:
1 message for print@MYDOMAIN.TLD at pop3.MYDOMAIN.TLD (32139 octets).
Loaded from /tmp/uudk7XsG: 'Test 2' (Test): Stage2.pdf part 1 Base64
Opened file /tmp/uudk7XsG
procmail: Lock failure on "/var/mail/pi.lock"
reading message print@MYDOMAIN.TLD@SERVER.HOSTER.TLD:1 of 1 (32139 octets) flushed
mail2print.sh: 6: mail2print.sh: /home/pi/attachments/Stage2.pdf: Permission denied
The email is fetched from the pop3 account, the attachment is extracted and appears for a short moment in ~/attachments/, then gets deleted. But there's no printout.
Any ideas what I'm doing wrong?
if $f == "*.pdf"
should be
if [[ $f == *.pdf ]]
Also I think
FILES=/home/pi/attachments/*
should be quoted:
FILES='/home/pi/attachments/*'
Suggestion:
#!/bin/bash
fetchmail # gets the emails, extracts the PDFs to ~/attachments
wait "$!" # takes some time so I have to wait for it to finish
shopt -s nullglob # expand to nothing instead of the literal pattern if no files match
FILES=(/home/pi/attachments/*)
for f in "${FILES[#]}"; do # go through all files in the directory
[[ $f == *.pdf ]] && lpr -P ColorLaserJet1525 "$f" # print them if they're PDFs
done
sudo rm -- "${FILES[#]}" /var/mail/pi # delete files and emails at once
Use the following to pick up only the PDF files in the first place; then you can remove the if test inside the for loop.
FILES=(/home/pi/attachments/*.pdf)
