Adding printers by shell script; works in terminal but not as .command - bash

I am trying to provide a clickable .command file to set up printers on Macs at my workplace. Since this is something I do very frequently, I thought I could write a shell script for each printer and save it on a shared server. Then, when I need to add a printer for someone, I can just find the script on the server and execute it. My current command works in Terminal, but once executed as a .command file, it fails with an error.
This is my script:
#!/bin/sh
lpadmin -p ‘PRINTERNAME’ -D PRINTER\ NAME -L ‘OFFICE’ -v lpd://xx.xx.xx.xx -P /Library/Printers/PPDs/Contents/Resources/Xerox\ WorkCentre\ 7855.gz -o printer-is-shared=false -E​
I get this error after running the script:
lpadmin: Unknown option “?”.
I find this strange, because there is no "?" in the script.

I have an idea: why not try it like this? Your script contains typographic quotes (‘…’) and at least one invisible character (a zero-width space right after the -E), most likely from copying the command out of a styled document; lpadmin sees those bytes as an option it does not recognize and prints it as "?". Retyping the command with plain ASCII quotes, and splitting it into variables so each piece is easy to inspect, should fix it:
#!/bin/sh
PPD="PRINTERNAME"
INFO="PRINTER NAME"   # no backslash escapes are needed inside double quotes
LOC="OFFICE"
URI="lpd://xx.xx.xx.xx"
OP="printer-is-shared=false"   # no spaces around "=" in shell assignments
# The -P parameter is the path to the printer's PPD file, not the paper name.
P="/Library/Printers/PPDs/Contents/Resources/Xerox WorkCentre 7855.gz"
lpadmin -p "$PPD" -D "$INFO" -L "$LOC" -v "$URI" -P "$P" -o "$OP" -E

Related

bash script to log in as another user and keep the terminal open

I have set up an HTTP server at localhost, with several sites. I would like to connect to each site's root folder in the same way I would on a remote server via ssh. So I tried to create a bash script, intended to log in as user "http", taking the site root folder as an argument and changing $HOME to the site root folder:
#!/bin/bash
echo "Connecting to $1 as http...";
read -p "Contraseña: " -s pass;
su - http << EOSU >/dev/null 2>&1
$pass
export HOME="/srv/http/$1";
echo $HOME;
source .bash_profile;
exec $SHELL;
EOSU
It does not work, basically because of:
echo $HOME keeps printing the home folder of the user launching the script.
when the script reaches the end, it exits (obviously), but I would like it to stay open, so that I have a terminal session as user "http" and can go on typing commands.
In other words, I am looking for a script that saves me 3 commands:
# su - http
# cd <site_root_folder>
# export HOME=<site_root_folder>
Edit:
Someone suggested the following:
#!/bin/bash
init_commands=(
    "export HOME=/srv/http/$(printf '%q' "$1")"
    'cd $HOME'
    '. .bash_profile'
)
su http -- --init-file <(printf '%s\n' "${init_commands[@]}")
I am sorry, but their post is gone... In any case, this gives me bash: /dev/fd/63: permission denied. I am not skillful enough to understand the commands above and sort it out. Can someone help me?
Thanks.
Possible solution:
I have been playing around, based on what was posted and some googling, and finally I got it :-)
trap 'rm -f "$TMP"' EXIT
TMP=$(mktemp) || exit 1
chmod a+r $TMP
cat >$TMP <<EOF
export HOME=/srv/http/$(printf '%q' "$1")
cd \$HOME
. .bash_profile
EOF
su http -- --init-file $TMP
I admit it is not nice code, because:
the temporary file is created by the user executing the script, and I later have to chmod a+r it so user "http" can read it... not so good.
I am sure this can be done on the fly, without creating a tmp file.
If someone can improve it, improvements are welcome; in any case, it works!
Your main problem is that $HOME is evaluated when the user runs the script, meaning that it expands to that user's home directory instead of being evaluated as the target user.
You could force it to be evaluated as the given user (using the eval command), but I won't recommend it; that kind of evaluation is generally bad practice, especially where security is concerned.
Instead, I recommend looking up the specific user's home directory with standard Linux tools such as getent.
Example of the behavior you are seeing:
# expected output is /home/eliott
$ sudo -u eliott echo $HOME
/root
Working it around with passwd:
$ sudo -u eliott echo $(getent passwd eliott | cut -d: -f6)
/home/eliott
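Putting the pieces together, here is a minimal sketch of the asker's script (my own combination, not from the original answers). It assumes su accepts -s to override the "http" user's login shell, and it lets su prompt for the password itself:
#!/bin/bash
# Sketch: resolve the site root from the argument, then open an interactive
# shell as "http" with HOME pointing at the site root, so it stays open.
site_root="/srv/http/$(printf '%q' "$1")"
exec su -s /bin/bash http -c "export HOME=$site_root; cd \"\$HOME\"; . .bash_profile; exec bash -i"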

In this simple Docker wrapper script example, how may one correctly pass a current working directory path which contains spaces?

My Docker wrapper script works as intended when the current working directory does not contain spaces; however, there is a bug when it does.
I have simplified an example that uses the smallest official Docker image I could find and a well-known GNU core utility. Of course this example is not very useful; in my real-world use case, a much more complicated environment is packaged.
Docker Wrapper Script:
#!/usr/bin/env bash
##
## Dockerized ls
##
set -eux
# Only allocate tty if one is detected
# See https://stackoverflow.com/questions/911168/how-to-detect-if-my-shell-script-is-running-through-a-pipe
if [[ -t 0 ]]; then
DOCKER_RUN_OPTIONS+="-i "
fi
if [[ -t 1 ]]; then
DOCKER_RUN_OPTIONS+="-t "
fi
WORK_DIR="$(realpath .)"
DOCKER_RUN_OPTIONS+="--rm --user=$(id -u $(logname)):$(id -g $(logname)) --workdir=${WORK_DIR} --mount type=bind,source=${WORK_DIR},target=${WORK_DIR}"
exec docker run ${DOCKER_RUN_OPTIONS} busybox:latest ls "$@"
You can save this somewhere as /tmp/docker_ls for example. Remember to chmod +x /tmp/docker_ls
Now you are able to use this Dockerized ls in any path which contains no spaces as follows:
/tmp/docker_ls -lah
/tmp/docker_ls -lah | grep 'r'
Note that /tmp/docker_ls -lah /path/to/something is not implemented. The wrapper script would have to be adapted to parse parameters and mount the path argument into the container.
Can you see why this does not work when the current working directory path contains spaces? What can be done to rectify it?
Solution:
@david-maze's answer solved the problem. Please see: https://stackoverflow.com/a/55763212/1782641
Using his advice I refactored my script as follows:
#!/usr/bin/env bash
##
## Dockerized ls
##
set -eux
# Only allocate tty if one is detected. See - https://stackoverflow.com/questions/911168
if [[ -t 0 ]]; then IT+=(-i); fi
if [[ -t 1 ]]; then IT+=(-t); fi
USER="$(id -u $(logname)):$(id -g $(logname))"
WORKDIR="$(realpath .)"
MOUNT="type=bind,source=${WORKDIR},target=${WORKDIR}"
exec docker run --rm "${IT[@]}" --user "${USER}" --workdir "${WORKDIR}" --mount "${MOUNT}" busybox:latest ls "$@"
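With the array and the quoting in place, the wrapper now also works from a directory whose path contains spaces, for example:
cd "/tmp/some dir with spaces"
/tmp/docker_ls -lah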
If your goal is to run a process on the current host directory as the current host user, you will find it vastly easier and safer to use a host process, and not an isolation layer like Docker that intentionally tries to hide these things from you. For what you’re showing I would just skip Docker and run
#!/bin/sh
ls "$#"
Most software is fairly straightforward to install without Docker, either using a package manager like APT or filesystem-level isolation like Python’s virtual environments and Node’s node_modules directory. If you’re writing this script then Docker is just getting in your way.
In a portable shell script there’s no way to make “a list of words” in a way that keeps their individual wordiness. If you know you’ll always want to pass some troublesome options then this is still fairly straightforward: include them directly in the docker run command and don’t try to create a variable of options.
#!/bin/sh
RM_IT="--rm"
# [ ... ] rather than [[ ... ]] keeps this POSIX sh compatible.
if [ -t 0 ]; then RM_IT="$RM_IT -i"; fi
if [ -t 1 ]; then RM_IT="$RM_IT -t"; fi
# Avoid assigning to UID/GID, which are read-only variables in bash.
USER_ID=$(id -u "$(logname)")
GROUP_ID=$(id -g "$(logname)")
# We want the --rm -it options to be expanded into separate
# words; we want the volume options to stay as a single word
docker run $RM_IT "-u$USER_ID:$GROUP_ID" "-w$PWD" "-v$PWD:$PWD" \
    busybox \
    ls "$@"
Some shells, such as ksh, bash, and zsh, have array types, but these shells may not be present on every system or environment (your busybox image doesn't have any of them, for example). You might also consider picking a higher-level scripting language that can pass words into an exec-type call more explicitly.
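As a sketch of that portable middle ground (my own addition, not part of the answer above): POSIX sh does give you one safe word list, the positional parameters, which you can rebuild with set -- so each option stays a separate word. Here the user lookup is simplified to the current user's IDs:
#!/bin/sh
# Build the final "docker run" argument list in "$@", the only list that
# plain sh preserves without word splitting. Prepend in reverse order so
# the result reads: --rm [-i] [-t] -u... -w... -v... busybox ls <args>
set -- busybox ls "$@"
set -- "-u$(id -u):$(id -g)" "-w$PWD" "-v$PWD:$PWD" "$@"
if [ -t 1 ]; then set -- -t "$@"; fi
if [ -t 0 ]; then set -- -i "$@"; fi
set -- --rm "$@"
exec docker run "$@"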
I'm taking a stab at this to give you something to try:
Change this:
DOCKER_RUN_OPTIONS+="--rm --user=$(id -u $(logname)):$(id -g $(logname)) --workdir=${WORK_DIR} --mount type=bind,source=${WORK_DIR},target=${WORK_DIR}"
To this:
DOCKER_RUN_OPTIONS+="--rm --user=$(id -u $(logname)):$(id -g $(logname)) --workdir=${WORK_DIR} --mount type=bind,source='${WORK_DIR}',target='${WORK_DIR}'"
Essentially, we are putting the single quotes in there to escape the spaces when the $DOCKER_RUN_OPTIONS variable is expanded by bash in the exec docker command.
I haven't tried this - it's just a hunch / first shot.

Running programs only if they are installed, and ignoring them otherwise

When writing shell scripts, is there an idiom or swift way to run a program only if it is installed, and if it is not, just let it be (or handle the error in some other way, apart from installing it)?
More specifically, I have a lot of servers which I access over ssh, and whenever I get a new server, I simply copy all my rc-files to it. The .zshrc starts tmux unless it is already running. Some of the servers (not all) do not have tmux installed. I do not want to install it because of disk space limitations, I do not want to have different rc-files for different servers, and I do not want my rc-files to be interrupted when executing them.
I have seen solutions involving apt-cache policy <package-name>, so I guess I could use that and pipe it to something like grep -e 'Installed: (none)', but that would assume the server runs Debian or Ubuntu, which I cannot guarantee, and it would only work for packages installed with apt, not for things I have installed in other ways.
command -v <command> is the common (and POSIX) way to check whether a command can be executed (i.e. it is executable and on the $PATH).
E.g:
command -v tmux >/dev/null &&
tmux a -t name
(>/dev/null since, if the command exists, its path will be printed to STDOUT.)
It could be nice to put it in a reusable function:
maybe() {
    ! command -v "${1}" >/dev/null ||
        "$@"
}
Then one could use:
maybe tmux a -t name
And if tmux is available then tmux a -t name will be run, otherwise it’ll be silently ignored.
Or, if you want some feedback when a command is not available:
maybe() {
    if command -v "${1}" >/dev/null
    then
        "$@"
    else
        printf 'Command "%s" not available, skipping\n' "${1}" >&2
    fi
}
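Applied to the asker's .zshrc use case, the check might look like this sketch, where "main" is a placeholder session name:
# Attach to an existing tmux session, or create one, but only when tmux is
# installed and we are not already inside a session.
if command -v tmux >/dev/null && [ -z "$TMUX" ]; then
    tmux new-session -A -s main
fi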
This might help:
1) Assuming tmux is available in PATH (as it must be executable):
isAvailable=$(type -P tmux)
if [[ -x $isAvailable ]]; then
    ...
fi
2) Verify the file is present at a specific path (copying all rc-files):
export FILEPATH="..."
if [[ -f $FILEPATH ]]; then
    ...
fi

No display of variable data on mail output - Shell scripting

I've scheduled a task in a UNIX environment which sends a report of running/stopped services, built with shell scripting. Here is the code:
#!/bin/bash
echo -e "\t\tServer daily monitoring report\n">/home/user/MailLog.txt
echo -e "\t\t`date "+%Y-%m-%d %H:%M:%S"`\n">>/home/user/MailLog.txt
sudo bash /home/user/commands.sh>>/home/user/MailLog.txt
echo >>/home/user/MailLog.txt
cat /home/user/MailLog.txt>>/home/user/StatusLog.txt
rn=`grep -c "running" MailLog.txt`
sp=`grep -c "stopped" MailLog.txt`
echo -e "Server status report\n\nServices running:\t $rn \nServices
stopped:\t $sp "|mailx -v -s "Services report." -a /home/user/MailLog.txt
useremail1#domain.com,useremail2#domain.com
#echo $run $stp
#rm /home/user/MailLog.txt
As per the scheduled task, I receive the mail and attachment all right, but I get a blank in front of 'Services running:' and 'Services stopped:'.
When I manually run the script, I get the proper output (numbers + attachment).
Please tell me what I'm doing wrong.
Replace MailLog.txt with /home/user/MailLog.txt in both grep commands. It's very likely that you manually run the commands from the /home/user/ directory, but the script's working directory isn't /home/user, which makes the relative path MailLog.txt point to a nonexistent file.
rn=$(grep -c "running" /home/user/MailLog.txt)
sp=$(grep -c "stopped" /home/user/MailLog.txt)
Better yet, set the file path in a variable and reuse it each time you want to refer to the file:
work_file="/home/user/MailLog.txt"
#[...]
rn=$(grep -c "running" "$work_file")
sp=$(grep -c "stopped" "$work_file")
Note that your code could be improved in many other ways; I suggest you validate it with shellcheck (you can ignore the sudo+redirect warning, since your user has write permissions to the MailLog.txt file).
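For reference, here is a sketch of the whole script with the path fix applied (paths and addresses are the asker's placeholders):
#!/bin/bash
# Every reference goes through $work_file, so the script no longer depends
# on cron's working directory.
work_file="/home/user/MailLog.txt"
{
    echo -e "\t\tServer daily monitoring report\n"
    echo -e "\t\t$(date "+%Y-%m-%d %H:%M:%S")\n"
    sudo bash /home/user/commands.sh
    echo
} > "$work_file"
cat "$work_file" >> /home/user/StatusLog.txt
rn=$(grep -c "running" "$work_file")
sp=$(grep -c "stopped" "$work_file")
echo -e "Server status report\n\nServices running:\t$rn\nServices stopped:\t$sp" |
    mailx -v -s "Services report." -a "$work_file" useremail1@domain.com,useremail2@domain.com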

OSX bash script works but fails in crontab on SFTP

This topic has been discussed at length; however, I have a variant on the theme that I just cannot crack. Two days into this now, I decided to ping the community. Thanks in advance for reading.
Executive summary: I have a script on OS X that runs fine and executes without issue or error when run manually. When I put the script in the crontab to run daily, it still runs, but it doesn't run all of the commands (specifically SFTP).
I have read enough posts to go down the path of environment issues, so as you will see below, I hard-coded the location of the sftp binary in case of a PATH issue...
The only thing I can think of is the IdentityFile. NOTE: I am putting this in the crontab for my user, not root, so I understand that it should pick up the id_dsa.pub that I have created (and that has already been shared with the server).
I am not trying to do any funky expect commands to bypass the password, etc. I don't know why, when run from cron, it skips the SFTP line.
Please see the code below; any help is greatly appreciated. Thanks.
#!/bin/bash
export DATE=`date +%y%m%d%H%M%S`
export YYMMDD=`date +%y%m%d`
PDATE=$DATE
YDATE=$YYMMDD
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED="~/Dropbox/"
USER="user"
HOST="host.domain.tld"
A="/tmp/5nPR45bH"
>${A}.file1${PDATE}
>${A}.file2${PDATE}
BYEbye ()
{
    rm ${A}.file1${PDATE}
    rm ${A}.file2${PDATE}
    echo "Finished cleaning internal logs"
    exit 0
}
echo "get -r *" >> ${A}.file1${PDATE}
echo "quit" >> ${A}.file1${PDATE}
eval mkdir ${FEED}${YDATE}
eval cd ${FEED}${YDATE}
eval /usr/bin/sftp -b ${A}.file1${PDATE} ${USER}@${HOST}
BYEbye
exit 0
Not an answer, just comments about your code.
The way to handle filenames with spaces is to quote the variable: "$var" -- eval is not the way to go. Get into the habit of quoting all variables unless you specifically want to use the side effects of not quoting.
you don't need to export your variables unless there's a command you call that expects to see them in the environment.
you don't need to call date twice because the YYMMDD value is a substring of the DATE: YYMMDD="${DATE:0:6}"
just a preference: I use $HOME over ~ in a script.
you never use the "file2" temp file -- why do you create it?
since your sftp batch file is pretty simple, you don't really need a file for it:
printf "%s\n" "get -r *" "quit" | sftp -b - "$USER#$HOST"
Here's a rewrite, shortened considerably:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED_DIR="$HOME/Dropbox/$(date +%Y%m%d)"
USER="user"
HOST="host.domain.tld"
mkdir "$FEED_DIR" || { echo "could not mkdir $FEED_DIR"; exit 1; }
cd "$FEED_DIR"
{
    echo "get -r *"
    echo quit
} | sftp -b - "${USER}@${HOST}"
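One more debugging suggestion, not from the comments above: when a script behaves differently under cron, capture its output so the failing command identifies itself. A hypothetical crontab entry (the script path is a placeholder):
# Run daily at 06:00, logging stdout and stderr so any sftp error is visible.
0 6 * * * /Users/me/bin/fetch_feed.sh >>/tmp/fetch_feed.log 2>&1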
