Writing a shell script while using "Packages" to make a .pkg file - macOS

I really need your help with this:
I am trying to build my app into a .pkg file, and at the same time I want to bundle Node.js into the .pkg installer so that it gets installed only if the OS doesn't already have it.
When I tried to write a script to judge whether the user has already installed node, I got stuck on the "return value of the external script". I tried ending my script with echo, return, and exit, but it still doesn't work.
And this is the script I wrote:

```bash
#!/bin/bash
OUTPUT="$(node -v)"
echo "${OUTPUT}"
if [[ $OUTPUT = "" ]]; then
    echo "1"
    return 1    # no node
else
    echo "0"
    return 0    # node found
fi
```
Please help me.

This script will run the "node -v" command and send its output (stderr and stdout) to /dev/null, so nothing is displayed to the user. The if condition checks whether the command ran successfully and sets the exit status to 0 or 1 accordingly.
```bash
#!/bin/bash
main() {
    node -v >/dev/null 2>&1
    if [[ $? -eq 0 ]]; then
        return 0
    else
        return 1
    fi
}
main
```
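As a side note, a common way to test for a command's presence without running it at all is `command -v`; this is a sketch of that approach (the `check_cmd` helper name is my own, not from the original posts):

```bash
#!/bin/bash
# check_cmd: succeed (status 0) if the named command is on PATH.
# "command -v" is quieter than capturing "node -v", which prints an
# error to stderr when node is missing.
check_cmd() {
    command -v "$1" >/dev/null 2>&1
}

if check_cmd node; then
    echo "node found"
else
    echo "no node"
fi
```

In a Packages requirement script the script's exit status is what matters, so you would finish with `exit 0` / `exit 1` rather than `return`, which is only valid inside a function or a sourced script.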

Related

Exit causes root logout if script user executes the script outside of the script directory

Bash Script Bug User Gets Logged Out Of Root On Exit
I have a big Bash script which requires exiting the script and restarting it if some user input is invalid. I got a weird bug: if the user executes the script outside of the directory in which the script is located, he gets logged out of root; but if the script is executed inside the directory where the script is located, this doesn't occur.
I already tried removing the exit but that only makes things worse.
```bash
#!/bin/bash
some_function() {
    read -p "Enter something: "
    # Some commands
    if [[ $? -gt 0 ]]; then
        echo "error"
        . /whatever/location/script.sh && exit 1
    fi
}
```
The expected result is that the script just restarts and exits the process the user ran. The actual result is just like that, except that the user gets logged out of root when the script terminates.
You did not say so, but it seems you are sourcing the script containing this function that exits. If you source it, it is as if each command were typed at the command line, so `exit` logs you out of whatever shell you are running.
For a script that is always sourced, use return instead of exit
If you don't know whether the script will be sourced or not, you'll need to detect it and choose the proper behavior based on how it was called. For example:
```bash
some_function() {
    read -p "Enter something: "
    # Some commands
    if [[ $? -gt 0 ]]; then
        echo "error"
        if [[ "${BASH_SOURCE[0]}" != "${0}" ]]; then
            # sourced
            . /whatever/location/script.sh && return 1
        else
            # not sourced
            . /whatever/location/script.sh && exit 1
        fi
    fi
}
```
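The `BASH_SOURCE` test above can be demonstrated in isolation; a minimal sketch (`detect.sh` is a hypothetical file name):

```bash
#!/bin/bash
# Save as detect.sh. When sourced, ${BASH_SOURCE[0]} names this file
# while $0 still names the caller, so the two differ; when executed
# directly, both name this file.
if [[ "${BASH_SOURCE[0]}" != "${0}" ]]; then
    echo "sourced"
else
    echo "executed"
fi
```

Running `bash detect.sh` prints `executed`, while `. detect.sh` from another shell or script prints `sourced`.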

Exit if in a script, do not exit if using a terminal/tty

If the user is entering commands in a terminal, I want to echo an error statement, but I do not want the terminal to close, so I have this:
```bash
if [[ "$fle" =~ [^a-zA-Z0-9] ]]; then
    echo "quicklock: lockname has invalid chars - must be alpha-numeric chars only."
    if [[ -t 1 ]]; then
        # if we are in a terminal just return, do not exit.
        return 1
    else
        exit 1
    fi
fi
```
However, the `if [[ -t 1 ]]; then` check does not seem to work; the terminal window I am using just closes immediately, so I think `exit 1` is being called.
The `-t` test checks whether a file descriptor is open and refers to a terminal; specifically, `[ -t 1 ]` tests whether stdout is attached to a tty, so when you run the script from a terminal this condition is true.
Also, the `return` keyword is applicable only inside a function (or a sourced script), where it breaks out of the function instead of terminating the shell itself. Your terminal window can only close on `exit 1` if you source the script (i.e. run it in the same shell); it will not happen if you execute the script in a sub-shell.
You can use the no-op command `:` when a branch in a script should do nothing:

```bash
if [[ -t 1 ]]; then
    # if we are in a terminal just return, do not exit.
    :
fi
```

Also, `-t` is defined by POSIX, so you can use just `[ -t 1 ]`.
This is actually what ended up working for me:

```bash
function on_conditional_exit {
    if [[ $- == *i* ]]; then
        # if we are in a terminal just return, do not exit.
        echo "quicklock: since we are in a terminal, not exiting."
        return 0
    fi
    echo "quicklock: since we are not in a terminal, we are exiting..."
    exit 1
}
```
The test checks whether we are in a terminal or in a script somewhere; if the shell is interactive, we are in a terminal.
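The two tests used in this thread answer different questions, which is worth seeing side by side; a small sketch:

```bash
#!/bin/bash
# [[ $- == *i* ]] asks: is this shell interactive?
# [ -t 1 ]        asks: is stdout attached to a terminal?
# A script run as "bash script.sh" from a terminal is NOT interactive,
# yet its stdout usually IS a tty - which is why the two can disagree.
if [[ $- == *i* ]]; then
    echo "interactive shell"
else
    echo "non-interactive shell"
fi
if [ -t 1 ]; then
    echo "stdout is a tty"
else
    echo "stdout is not a tty"
fi
```

Note that when output is captured or piped, `[ -t 1 ]` is false even in an interactive session.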

Protect the program before turning it back on

My script is executed by cron and checks every 2 minutes whether xxx is running. If it is not among the running processes, the script starts it. The problem is that sometimes it starts it several times.
My problem is how to detect that the program is already running.
How can bash detect whether the pidof command returns several PIDs rather than one?
```bash
#!/bin/bash
PID=$(pidof xxx)
if [ "$PID" = "" ]; then
    cd /home/pi
    sudo ./xxx
    echo "OK"
else
    echo "program is running"
fi
```
You can use this script to do the same; it makes sure the script is executed only once:

```bash
#!/bin/bash
ID=$(ps -ef | grep scriptname | grep -v grep | wc -l)
if [ "$ID" -eq 0 ]; then
    :   # run the script here
else
    echo "script is running"
fi
```
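The `ps | grep` approach is racy (two cron invocations can both pass the check before either starts the program), and `grep scriptname` can also match unrelated processes. A sketch of a less race-prone variant using `flock(1)`; the lock path is my own choice, and on the Raspberry Pi you would replace the `echo` with `cd /home/pi && sudo ./xxx`:

```bash
#!/bin/bash
# Let the kernel arbitrate: "flock -n" fails immediately if another
# process already holds a lock on the file behind this descriptor.
LOCKFILE=/tmp/xxx.lock   # hypothetical path

run_once() {
    exec 200>"$LOCKFILE"
    if ! flock -n 200; then
        echo "program is already running"
        return 1
    fi
    # The lock is held until this script exits.
    "$@"
}

run_once echo "starting xxx"
```

Because the lock is tied to an open file descriptor, it is released automatically when the process dies, so there is no stale lock file to clean up after a crash.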

Is it logical to use the killall command to exit a script?

I am working around with a pin generator and I have come across a small issue.
I know of a few different methods of exiting a script, but I have been experimenting with calling the same script that is running as a child process. When the child process is not called, the script exits perfectly; when it is called, the parent script does not exit properly after the child has completed and exited, and instead loops back to the user input. I cannot think of anything other than possibly using the "wait" command, though I don't know if that command would be appropriate with this code.

Any thoughts on using the "killall" command to exit the script? I have tested it out, as you can see in the code below, but I am left with the message "Terminated". If I can use killall, how would I prevent that message from printing to standard out? Here is my code:
```bash
#!/bin/bash
clear
echo ""
echo "Now generating a random pin."
sleep 3
echo ""
echo "----------------------------------------------"
echo ""
# Generates a random 8-digit number
gen_num=$(tr -dc '0-9' </dev/urandom | head -c 8)
echo " Pin = $gen_num "
echo ""
echo "Pin has been generated!"
sleep 3
echo ""
clear
PS3="Would you like to generate another pin?: "
select CHOICE in "YES" "NO"
do
    if [ "$CHOICE" == "YES" ]
    then
        bash "/home/yokai/Modules/Wps-options.sh"
    elif [ "$CHOICE" == "NO" ]
    then
        clear
        echo ""
        echo "Okay bye!"
        sleep 3
        clear
        killall "Wps-options.sh"
        break
        exit 0
    fi
done
exit 0
```
You don't need to call the same script recursively (and then kill all its instances). The following script performs the task without forking:
```bash
#!/bin/bash
gen_pin () {
    echo 'Now generating a random pin.'
    # Generates a random 8-digit number
    gen_num="$(tr -dc '0-9' </dev/urandom | head -c 8)"
    echo "Pin = ${gen_num}"
    PS3='Would you like to generate another pin?:'
    select CHOICE in 'NO' 'YES'
    do
        case ${CHOICE} in
            'NO')
                echo 'OK'
                exit 0;;
            *)
                break;;
        esac
    done
}

while true
do
    gen_pin
done
```
You can find a lot of information about how to program in bash here.
First of all, when you execute

```bash
bash "/home/yokai/Modules/Wps-options.sh"
```

the script forks and creates a child process; the parent then waits for the child to terminate and does not continue execution, unless your script Wps-options.sh executes something else in the background (forking again) without reaping its child. But I cannot tell you more, because I don't know what is in your script Wps-options.sh.
To prevent messages from being printed when you execute killall:

```bash
killall "Wps-options.sh" 1>/dev/null 2>/dev/null
```

Here `1>` redirects stdout to the file /dev/null and `2>` redirects stderr to the file /dev/null.
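For what it's worth, the two redirections can also be combined; a small sketch of the equivalent forms, using a harmless failing command instead of killall:

```bash
#!/bin/bash
# POSIX form: send stdout to /dev/null, then point stderr at stdout's
# current target. Order matters: "2>&1" must come AFTER ">/dev/null".
ls /nonexistent-demo-path >/dev/null 2>&1

# bash-only shorthand for the same thing:
ls /nonexistent-demo-path &>/dev/null

echo "neither ls above printed anything"
```

If you write `2>&1 >/dev/null` instead, stderr is duplicated onto the terminal first and only stdout is silenced, which is a common mistake.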

Locking executable for Multiple Users

I have an executable which is on a server.
Many users can log in to this server using SSH and execute the binary.
I would like each user to have exclusive access while executing this binary.
If possible, I would like to have a queue of users, served one by one.
Is it possible to do this using a bash script?
Suppose I call my binary my.exec and the script access.sh.
I can guarantee that each user will access my.exec only through access.sh, so it will be easy to write a wrapper on top of this executable.
In addition, the executable needs to be supplied variable arguments.
I have a partial solution here:
```bash
#!/bin/bash
# Locking
LOCK_FILE="/tmp/my.lock"

function create_lock_and_execute {
    echo "Waiting for Lock"
    while [ -f ${LOCK_FILE} ]
    do
        username=$(ls -l ${LOCK_FILE} | cut -f 3 -d " ")
        echo "$username is using the lock."
        sleep 1
    done
    (
        # Wait for lock ${LOCK_FILE} (fd 200).
        flock -x 200
        echo "Lock Acquired..."
        # Execute Command
        echo "Executing Command : $COMMAND"
        ${COMMAND}
    ) 200>${LOCK_FILE}
}

function remove_lock {
    rm -f ${LOCK_FILE}
}

# Exit trap function
function exit_on_error {
    echo "Exiting on error..."
    remove_lock
    exit 1
}

# Exit on kill
function exit_on_kill {
    echo "Killed, exiting..."
    remove_lock
    exit 1
}

# Exit normally
function exit_on_end {
    echo "Exiting..."
    remove_lock
    exit 0
}

trap exit_on_kill KILL
trap exit_on_error ERR
trap exit_on_end QUIT TERM EXIT INT

create_lock_and_execute
```
Thanks
Check "How to prevent a script from running simultaneously?" for examples.
Alternatively, you can manually check for a file: if the file does not exist, create it and go ahead, then delete it after everything is done. But this is not truly safe; with concurrent attempts, the check might fail.
Update:
I have a link to share. I saw it some time back and it took a little time to dig up. I have not checked it thoroughly; it seems fine at a quick look, but please test it properly if you use it:
http://gotmynick.github.io/articles/2012/2012-05-02-file_locks_queues_and_bash.html
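For completeness, the core of such a wrapper can be quite small; this is a sketch (not the linked article's code) that serializes callers with a blocking `flock`, with the caveat that flock does not guarantee strict FIFO ordering among waiters:

```bash
#!/bin/bash
# access.sh-style wrapper: run the given command (the binary plus its
# variable arguments) while holding an exclusive lock; other callers
# block inside "flock -x" until the lock frees.
LOCK_FILE="/tmp/my.lock"

run_exclusive() {
    (
        flock -x 200 || exit 1
        echo "Lock acquired, running: $*"
        "$@"
    ) 200>"${LOCK_FILE}"
}

run_exclusive echo "hello from my.exec"
```

Because the lock lives on a file descriptor inside the subshell, it is released automatically when the command finishes, even if it crashes, so no `rm` of the lock file or exit traps are needed.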
