Printing shell script doesn't print anything - bash

I'm trying to create a print service on my Raspberry Pi. The idea is to have a POP3 account for print jobs where I can send PDF files and get them printed at home. To that end I set up fetchmail → procmail → uudeview to collect the emails (using a whitelist), extract the documents and save them to /home/pi/attachments/. Up to this point everything is working.
To get the files printed I wanted to set up a shell script which I planned to execute via a cronjob every minute. That's where I'm stuck now: I get "permission denied" messages and nothing gets printed at all with the script, while it works when I execute the commands manually.
This is what my script looks like:
#!/bin/bash
fetchmail # gets the emails, extracts the PDFs to ~/attachments
wait $! # takes some time so I have to wait for it to finish
FILES=/home/pi/attachments/*
for f in $FILES; do # go through all files in the directory
    if $f == "*.pdf" # print them if they're PDFs
    then
        lpr -P ColorLaserJet1525 $f
    fi
    sudo rm $f # delete the files
done
sudo rm /var/mail/pi # delete emails
After the script is executed I get the following feedback:
1 message for print@MYDOMAIN.TLD at pop3.MYDOMAIN.TLD (32139 octets).
Loaded from /tmp/uudk7XsG: 'Test 2' (Test): Stage2.pdf part 1 Base64
Opened file /tmp/uudk7XsG
procmail: Lock failure on "/var/mail/pi.lock"
reading message print@MYDOMAIN.TLD@SERVER.HOSTER.TLD:1 of 1 (32139 octets) flushed
mail2print.sh: 6: mail2print.sh: /home/pi/attachments/Stage2.pdf: Permission denied
The email is fetched from the POP3 account, the attachment is extracted and appears for a short moment in ~/attachments/, then gets deleted. But there's no printout.
Any ideas what I'm doing wrong?

if $f == "*.pdf"
should be
if [[ $f == *.pdf ]]
Also I think
FILES=/home/pi/attachments/*
should be quoted:
FILES='/home/pi/attachments/*'
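For context: inside [[ ]], an unquoted right-hand side of == is treated as a glob pattern, while a quoted one is compared literally, so "*.pdf" can only ever match a file literally named *.pdf. A quick sketch with a filename from the question's log:
f=/home/pi/attachments/Stage2.pdf
[[ $f == *.pdf ]] && echo "pattern match"    # prints: unquoted *.pdf is a glob here
[[ $f == "*.pdf" ]] && echo "literal match"  # prints nothing: compares against the literal string *.pdf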
Suggestion:
#!/bin/bash
fetchmail # gets the emails, extracts the PDFs to ~/attachments
wait "$!" # takes some time so I have to wait for it to finish
shopt -s nullglob # expand to nothing instead of the literal pattern when no files match
FILES=(/home/pi/attachments/*)
for f in "${FILES[#]}"; do # go through all files in the directory
[[ $f == *.pdf ]] && lpr -P ColorLaserJet1525 "$f" # print them if they're PDFs
done
sudo rm -- "${FILES[#]}" /var/mail/pi # delete files and emails at once

Use the below to filter the PDF files in the first place; then you can remove the if statement inside the for loop.
FILES=$(ls /home/pi/attachments/*.pdf)

Related

Unix script - comparing filename dates with a single input date

I am new to Unix scripting and have been trying to write this script for a week, but I couldn't get it working. Please help me with this.
I have more than 100 files (all with different names) in a directory, where each filename contains a date string (e.g. 20171101). I want to compare these filename dates with my input date (today - 10 days = 20171114), using only the date string in the filename, and delete every file whose filename date is less than my input date. Could anyone please help with this? Thanks
My script:
ten_days_ago=$(date -d "10 days ago" +%Y%m%d)
cd "$destination_dir" ;
ls *.* | awk -F '-' '{print $2}'
ls *.* | awk -F '-' '{print $2}' > removal.txt
while read filedate
do
    if [ "$filedate" -lt "$ten_days_ago" ] ; then
        cd "$destination_dir" ;
        rm *-"$filedate"*
        echo "deletion done"
    fi
done <removal.txt
This script is working fine, but I need to send an email as well: if the deletion has been done, a "pass" email, otherwise a "fail" email.
But if I write the mail commands inside the while loop, they get sent on every iteration.
You're probably trying to pipe to mail from the middle of your loop. (Your question should really show this code, otherwise we can't say what's wrong.) A common technique is to redirect the loop's output to a file, and then send that. (Using a temporary file is slightly ugly, but avoids sending an empty message when there is no output from the loop.)
Just loop over the files and decide which to remove.
#!/bin/bash
t=$(mktemp -t tendays.XXXXXXXX) || exit
# Remove temp file if interrupted, or when done
trap 'rm -f "$t"' EXIT HUP INT TERM
ten_days_ago=$(date -d "10 days ago" +%Y%m%d)
for file in *-[1-9]*[1-9]-*; do
    date=${file#*-} # strip prefix up through the first dash
    date=${date%-*} # strip from the last dash onward
    if [ "$date" -lt "$ten_days_ago" ]; then
        rm -v "$file"
    fi
done >"$t" 2>&1
test -s "$t" || exit # Quit if empty
mail -s "Removed files" recipient#example.net <"$t"
I removed the (repeated!) cd so this can be run in any directory -- just switch to the directory you want before running the script. This also makes it easier to test in a directory with a set of temporary files.
Collecting the script's standard error also means the mail message will contain any error messages if rm fails for some reason or you have other exceptions.
By the by you should basically never use ls in scripts.
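If you ever need that list of dates again, a glob plus parameter expansion produces it without ls; a minimal sketch reusing the filename convention from the question (the pattern is an assumption about your names):
for file in *-[0-9]*-*; do   # let the shell expand the names; no ls, no word splitting
    date=${file#*-}          # strip prefix up through the first dash
    date=${date%-*}          # strip from the last dash onward
    printf '%s\n' "$date"    # filenames with spaces survive intact
done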

Shell Scripting: Calling mailx in a loop

I am trying to write a script that sends multiple emails, with one file attached per email. This is because of mail attachment size limitations.
I have zip files in a directory and they are file01.zip, file02.zip etc. and there will be about 4-5 of these files.
# File count is normally passed in
numFiles=5
fileCounter=1
datestr="`date +"%m/%d/%Y"`"
while [ $fileCounter -le $numFiles ]
do
    SUBJECT_LINE="Weekly files ($fileCounter of $numFiles) - $datestr"
    echo "[`date`] E-mailing file ($fileCounter of $numFiles) ... "
    ZIPFILE="file0$fileCounter.zip"
    echo $ZIPFILE
    ls -ltr $ZIPFILE
    mailx -a "$ZIPFILE" \
        -r no-reply@host.com \
        -s "$SUBJECT_LINE" \
        $TO_LIST < /dev/null
    echo "[`date`] Done"
    fileCounter=$(( $fileCounter + 1 ))
done
I am trying to call mailx in a loop as you can see. I tried the following as well
for file in file0*.zip
do
...
done
I am able to see the ZIPFILE names when I print them out using echo but the mailx command in the loop returns the following although the files are there:
No such file or directory
I can run the same mailx command from console and have the e-mail sent out. I can also send one e-mail without a loop, but doing so inside a loop seems to cause an issue. Am I missing something?
I likely had one or more characters not visible to the eye in the file name ($ZIPFILE) being passed as the attachment to mailx. I typed parts of the script again today while troubleshooting, and that fixed the issue. The script above is otherwise good.
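If it happens again, you can make invisible characters visible before the name reaches mailx; a quick diagnostic sketch (the variable name matches the script above):
printf '%s\n' "$ZIPFILE" | od -c    # dumps every byte, so a stray \r or \t shows up
printf '%s\n' "$ZIPFILE" | cat -A   # GNU cat: shows control characters and marks line ends with $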

Create a detailed self-tracing log in bash

I know you can create a log of the output by typing in script nameOfLog.txt and exit in terminal before and after running the script, but I want to write it in the actual script so it creates a log automatically. There is a problem I'm having with the exec >>log_file 2>&1 line:
The code redirects the output to the log file, and the user can no longer interact with the script. How can I create a log that simply copies what appears in the output?
And, is it possible to have it also automatically record which files were copied? For example, if a file at /home/user/Desktop/file.sh was copied to /home/bckup, is it possible to have that printed in the log too or will I have to write that manually?
Is it also possible to record the amount of time it took to run the whole process and count the number of files and directories that were processed or am I going to have to write that manually too?
My future self appreciates all the help!
Here is my whole code:
#!/bin/bash
collect()
{
find "$directory" -name "*.sh" -print0 | xargs -0 cp -t ~/bckup #xargs handles files names with spaces. Also gives error of "cp: will not overwrite just-created" even if file didn't exist previously
}
echo "Starting log"
exec >>log_file 2>&1
timelimit=10
echo "Please enter the directory that you would like to collect.
If no input in 10 secs, default of /home will be selected"
read -t $timelimit directory
if [ ! -z "$directory" ] #if directory doesn't have a length of 0
then
echo -e "\nYou want to copy $directory." #-e is so the \n will work and it won't show up as part of the string
else
directory=/home/
echo "Time's up. Backup will be in $directory"
fi
if [ ! -d ~/bckup ]
then
echo "Directory does not exist, creating now"
mkdir ~/bckup
fi
collect
echo "Finished collecting"
exit 0
To answer the "how to just copy the output" question: use a program called tee and then a bit of exec magic explained here:
redirect COPY of stdout to log file from within bash script itself
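The heart of that technique is a single exec line with process substitution, so output still reaches the terminal while being appended to the log; a minimal sketch (log_file matches the name used in the question):
exec > >(tee -a log_file) 2>&1   # stdout and stderr go to the terminal and to log_file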
Regarding the analytics (time needed, files accessed, etc.): this is a bit harder. Some programs that can help you are time(1):
time - run programs and summarize system resource usage
and strace(1):
strace - trace system calls and signals
Check the man pages for more info. If you have control over the script it will be probably easier to do the logging yourself instead of parsing strace output.
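For the timing and counting questions, plain bash gets you most of the way; a sketch using the built-in SECONDS counter (the find command matches the collect function above; the counting loop is an assumption):
start=$SECONDS
count=0
while IFS= read -r -d '' f; do   # count the files that find reports
    count=$((count + 1))
done < <(find "$directory" -name "*.sh" -print0)
echo "Copied $count files in $((SECONDS - start)) seconds"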

Shell script to create a daily time-stamped folder and push time-stamped logs into it

I have a cron job which runs every 30 minutes and generates log files with time-stamps like these:
test20130215100531.log,
test20130215102031.log
I would like to create one folder per day, named with the date stamp, and move each log file into its respective date folder when it is generated.
I need to achieve this on an AIX server with bash.
Maybe you are looking for a script like this:
#!/bin/bash
shopt -s nullglob # This line is so that it does not complain when no logfiles are found
for filename in test*.log; do # Files considered are the ones starting with test and ending in .log
    foldername=$(echo "$filename" | awk '{print (substr($0, 5, 8));}'); # The folder name is characters 5 to 12 of the filename (the 8-digit date, if present)
    mkdir -p "$foldername" # -p so that we don't get a "folder exists" warning
    mv "$filename" "$foldername"
    echo "$filename $foldername" ;
done
I only tested with your sample, so do a proper testing before using in a directory that contains important stuff.
Edit in response to comments:
Change your original script to this:
foldername=$(date +%Y%m%d)
mkdir -p /home/app/logs/"$foldername"
sh sample.sh > /home/app/logs/"$foldername"/test$(date +%Y%m%d%H%M%S).log
Or if the directory is created somewhere else, just do this:
sh sample.sh > /home/app/logs/$(date +%Y%m%d)/test$(date +%Y%m%d%H%M%S).log
You should use logrotate! It can do this for you already, and you can just write to the same log file.
Check their man pages for info:
http://linuxcommand.org/man_pages/logrotate8.html
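A minimal logrotate configuration along those lines might look like this (logrotate is not part of base AIX, and the path and rotation policy are assumptions), dropped into /etc/logrotate.d/:
/home/app/logs/test.log {
    daily
    dateext
    rotate 7
    compress
    missingok
    notifempty
}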

BASH multiple nested IFs inside DO

The script I am trying to write needs to do the following
Go to a specific directory and find all files with the .mp4 extension that are newer than 30 minutes, and collect them in a variable.
All files within the variable need to be uploaded to an FTP server, and an email notification goes out. The next if needs to look through the files and find all files whose names end in 1280x720_3500_h32.mp4; these files are then uploaded to a different FTP server.
Lastly the script needs to move all files to a separate location on the network for archiving.
The process then needs to run every 30 minutes, which is why I have added a separate script that runs the 30-minute sleep loop and calls the initial script. So in effect I have an upload script and a sleep script. Two questions spring to mind:
I am not sure about the 2nd if, where the script looks for all the *1280x720_3500_h32.mp4 files - will it upload all files with this name that are newer than 30 minutes?
Could there be race conditions where the script tries to upload files that have already been processed, or skips files?
Sleep script:
#!/bin/bash
ScripLocal='/Users/jonathan/scripts/Event_Scripts'
while true
do
echo "running Upload script: find_newerthan_mv_toFtp_v2.sh"
cd $ScripLocal
. find_newerthan_mv_toFtp_v2.sh
echo "finished running script: ffind_newerthan_mv_toFtp_v2.sh - next run is in 30 minutes"
sleep 30m
done
Upload Script:
#!/bin/bash
#File Origin
GSPORIGIN='/Volumes/HCHUB/Episode/Watchfolders/Output/Profile_01/'
#To location on the SAN where the files are being stored
DESTDIRSAN='/Volumes/HCHUB/Digitalmedia/online_encodes'
# 1st FTP Details
HOST='ftp.upload01.com'
USER='xxxx'
PASSWD='xxxx'
DESTDIR="/8619/_!/intlod/upload/"
#2nd FTP details
HOST01='ftp.upload02.com'
USER01='xxxx'
PASSWD01='xxxx'
cd $GSPORIGIN
for file in `find . -type f -name "*.mp4" -mmin -30`
do
    echo $file
    if [ -f $file ] ; then
        echo "Uploading to FTP 01"
        ftp -n -v $HOST << EOT
ascii
user $USER $PASSWD
prompt
cd $DESTDIR
mput $file
EOT
        echo "$file has been copied to FTP 01" | mail -s "$file has been copied to FTP 01 in Directory $DESTDIR" xxxx@xxxx.com xxxx@xxxx.com xxxx@xxxx.com;
        if [[ $file == *1280x720_3500_h32.mp4 ]] ; then
            echo "Uploading FTP 02 version"
            ftp -n -v $HOST01 << EOT
ascii
user $USER01 $PASSWD01
prompt
mput $file
EOT
            echo "$file has been copied to FTP 02" | mail -s "$file has been copied to FTP 02" xxxx@xxxx.com;
        fi
        echo "moving files to SAN $DESTDIRSAN"
        mv -v $file $DESTDIRSAN
    else
        exit 1
    fi
done
If you're worried about race conditions, don't use FTP. FTP is a very, very bad protocol for data exchange - it doesn't give you proper error messages when something goes wrong, the exit code is always 0 (even when something didn't work), there is no protection against duplicate or missed uploads, etc.
Use rsync instead. rsync will start writing the target file under a temporary name (so the receiver won't see it until the upload is complete), it can resume an upload (important for big files), it uses checksums to make sure the upload is correct, and it fails with a non-zero exit code if anything is wrong.
On top of that, it can use ssh to secure the connection so you don't have to send plain text passwords over the network (which is a very bad idea today even in an Intranet).
rsync has powerful filtering options (what to upload), and it can archive any processed file so you won't upload something twice. It can even detect when a user uploads a file a second time and fail (or silently skip it if the file is 100% identical).
To make sure that you don't run two uploads at the same time, use a lock:
mkdir .lock || { echo "Can't lock - process is already running" ; exit 1 ; }
trap "rmdir .lock" EXIT
You might also want to have a look at FTAPI which does all this and more.
Use crontab, not sleeping threads. sleep is prone to failure, and nohupping is a hassle anyway.
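A crontab entry replacing the sleep script could look like this (the path is taken from the sleep script above):
*/30 * * * * /Users/jonathan/scripts/Event_Scripts/find_newerthan_mv_toFtp_v2.sh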
Use rsync instead of looking for changes yourself, if you can at all. Also, you don't need to script rsync like you do with ftp.
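As a sketch of how small the transfer step can become, assuming ssh access to the receiving host (host, user and remote path are placeholders):
rsync -av --remove-source-files -e ssh /Volumes/HCHUB/Episode/Watchfolders/Output/Profile_01/*.mp4 user@upload01.example.com:/upload/
# --remove-source-files deletes each local file only after it has transferred successfully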
Got the python variation of the script working (thank you Daniel!!).
#!/usr/bin/python
# Created in a rush, on the 31st October
# HALLOWEEN SPECIAL - QUICK AND DIRTY
import os
import shutil
from ftplib import FTP
# File Origin
ENCODE_OUTPUT='xxxxxxx'
# To location on the SAN where the files are being cached on the network
CACHE_DIR='xxxxxxxx'
#ENCODE_OUTPUT='encode/'
#CACHE_DIR='cache/'
#FTP01 FTP Details
FTP01_HOST='upload.FTP01.com'
FTP01_USER='xxxxx'
FTP01_PASSWD='xxxxxxx'
FTP01_DESTDIR="dir/dir/"
#Fingerprinting FTP details
FINGERPRINT_HOST='ftp.ftp02.com'
FINGERPRINT_USER='xxxxx'
FINGERPRINT_PASSWD='xxxxxx'
FINGERPRINT_DEST_DIR='/'
dirList = os.listdir(ENCODE_OUTPUT)
dirList.sort()
# Move the files out of the encode location and into the cache dir
for video_file in dirList:
    print "Moving file %s to %s" % (video_file, CACHE_DIR)
    shutil.move(ENCODE_OUTPUT + video_file, CACHE_DIR)
    # Upload the file to FTP01
    print "Logging into %s FTP" % FTP01_HOST
    ftp = FTP(FTP01_HOST)
    ftp.login(FTP01_USER, FTP01_PASSWD)
    ftp.cwd(FTP01_DESTDIR)
    print "Uploading file %s to %s" % (video_file, FTP01_HOST+FTP01_DESTDIR)
    ftp.storbinary('STOR ' + FTP01_DESTDIR+video_file, open(CACHE_DIR+video_file, "rb"), 1024)
    # Close the connection
    ftp.close()
    # If the file ends in "1280x720_3500_h32.mp4", copy it to the fingerprinting service
    if video_file[-21:]=="1280x720_3500_h32.mp4":
        # Upload the file to the fingerprinting service
        print "File is HD. Logging into %s FTP" % FINGERPRINT_HOST
        ftp = FTP(FINGERPRINT_HOST)
        ftp.login(FINGERPRINT_USER, FINGERPRINT_PASSWD)
        ftp.cwd(FINGERPRINT_DEST_DIR)
        print "Uploading file %s to %s" % (video_file, FINGERPRINT_HOST)
        ftp.storbinary('STOR ' + FINGERPRINT_DEST_DIR+video_file, open(CACHE_DIR+video_file, "rb"), 1024)
        ftp.close()
