How do you add a .txt file to a shell script as a variable - bash

Hello guys, I am trying to write a basic shell script that adds, deletes or lists multiple user accounts from a list provided in the form of a file specified at the command line. I am very new to this and have been banging my head on the keyboard for the last few hours. Below is an example of the syntax and the code so far. (I called this script buser.)
./buser.sh -a userlist (-a is the option and userlist is the filename, it is only an example)
file=$(< `pwd`/$2)
while :
do
    case $1 in
        -a)
            useradd -m "$file"
            break
            ;;
        --add)
            useradd -m "$file"
            break
            ;;
        --delete)
            userdel -r "$file"
            break
            ;;
        -d)
            userdel -r "$file"
            break
            ;;
        -l)
            cat /etc/passwd | grep "$file"
            break
            ;;
        --list)
            cat /etc/passwd | grep "$file"
            break
            ;;
    esac
done
When the useradd command reads $file, it reads all the names as a single string and I get an error.
Any help would be greatly appreciated, thank you.

Not sure if I understood correctly, but assuming you have a file with the following content:
**file.txt**
name1
name2
name3
You would like to call buser.sh -a file.txt and run useradd for name1, name2 and name3? I'm also assuming you're on Linux and useradd is the native program; if so, I suggest reading its man page, because it does not support adding a list of users at once (https://www.tecmint.com/add-users-in-linux/).
You have to call useradd multiple times instead.
while read -r user
do
    useradd -m "$user"
done < "$2"

A few simplifications, plus an error handler if the option doesn't exist:
while read file ; do
    case "$1" in
        -a|--add)
            useradd -m "$file"
            ;;
        -d|--delete)
            userdel -r "$file"
            ;;
        -l|--list)
            grep -f `pwd`/"$2" /etc/passwd
            break
            ;;
        *)
            echo "no such option as '$1'..."
            exit 2
            ;;
    esac
done < `pwd`/"$2"
Note: the above logic is a bit redundant... case "$1" keeps doing the same test (with the same result) every pass. OTOH, it works, and it's less code than a while loop in each command list.
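For the record, hoisting the test out of the loop could look like this — a minimal sketch reusing the options above, untested:
case "$1" in
    -a|--add)    cmd='useradd -m' ;;
    -d|--delete) cmd='userdel -r' ;;
    -l|--list)   grep -f "$2" /etc/passwd; exit 0 ;;
    *)           echo "no such option as '$1'..." >&2; exit 2 ;;
esac
while read -r user; do
    $cmd "$user"    # one useradd/userdel call per line of the file
done < "$2"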

You can use sed to create the commands, and eval to run them:
var=$( sed -e 's/^/useradd -m /' -e 's/$/;/' $file )
eval "$var"
(Edited to put in the -m flag.)
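For the file.txt above, $var expands to the following, which eval then runs as three separate commands:
useradd -m name1;
useradd -m name2;
useradd -m name3;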

Related

Speed up shell script/Performance enhancement of shell script

Is there a way to speed up the below shell script? It's taking me a good 40 mins to update about 150000 files every day. Sure, given the volume of files to create and update, this may be acceptable; I don't deny that. However, if there is a much more efficient way to write this, or a way to rewrite the logic entirely, I'm open to it. I'm looking for some help, please.
#!/bin/bash
DATA_FILE_SOURCE="<path_to_source_data>/${1}"
DATA_FILE_DEST="<path_to_dest>"
for fname in $(ls -1 "${DATA_FILE_SOURCE}")
do
    for line in $(cat "${DATA_FILE_SOURCE}"/"${fname}")
    do
        FILE_TO_WRITE_TO=$(echo "${line}" | awk -F',' '{print $1"."$2".daily.csv"}')
        CONTENT_TO_WRITE=$(echo "${line}" | cut -d, -f3-)
        if [[ ! -f "${DATA_FILE_DEST}"/"${FILE_TO_WRITE_TO}" ]]
        then
            echo "${CONTENT_TO_WRITE}" >> "${DATA_FILE_DEST}"/"${FILE_TO_WRITE_TO}"
        else
            if ! grep -Fxq "${CONTENT_TO_WRITE}" "${DATA_FILE_DEST}"/"${FILE_TO_WRITE_TO}"
            then
                sed -i "/${1}/d" "${DATA_FILE_DEST}"/"${FILE_TO_WRITE_TO}"
                echo "${CONTENT_TO_WRITE}" >> "${DATA_FILE_DEST}"/"${FILE_TO_WRITE_TO}"
            fi
        fi
    done
done
There are still parts of your published script that are unclear, like the sed command. Still, I rewrote it with saner practices and far fewer external calls, which should really speed it up.
#!/usr/bin/env sh
DATA_FILE_SOURCE="<path_to_source_data>/$1"
DATA_FILE_DEST="<path_to_dest>"

for fname in "$DATA_FILE_SOURCE/"*; do
    while IFS=, read -r a b content || [ "$a" ]; do
        destfile="$DATA_FILE_DEST/$a.$b.daily.csv"
        if grep -Fxq "$content" "$destfile"; then
            sed -i "/$1/d" "$destfile"
        fi
        printf '%s\n' "$content" >>"$destfile"
    done < "$fname"
done
Make it parallel (as much as you can).
#!/bin/bash
set -e -o pipefail
declare -ir MAX_PARALLELISM=20 # pick a limit
declare -i pid
declare -a pids

# ...

for fname in "${DATA_FILE_SOURCE}/"*; do
    if ((${#pids[@]} >= MAX_PARALLELISM)); then
        wait -p pid -n || echo "${pids[pid]} failed with ${?}" 1>&2
        unset 'pids[pid]'
    fi
    while IFS= read -r line; do
        FILE_TO_WRITE_TO="..."
        # ...
    done < "${fname}" & # forking here
    pids[$!]="${fname}"
done

for pid in "${!pids[@]}"; do
    wait -n "$((pid))" || echo "${pids[pid]} failed with ${?}" 1>&2
done
Here’s a directly runnable skeleton showing how the harness above works (with 36 items to process and 20 parallel processes at most):
#!/bin/bash
set -e -o pipefail
declare -ir MAX_PARALLELISM=20 # pick a limit
declare -i pid
declare -a pids

do_something_and_maybe_fail() {
    sleep $((RANDOM % 10))
    return $((RANDOM % 2 * 5))
}

for fname in some_name_{a..f}{0..5}.txt; do # 36 items
    if ((${#pids[@]} >= MAX_PARALLELISM)); then
        wait -p pid -n || echo "${pids[pid]} failed with ${?}" 1>&2
        unset 'pids[pid]'
    fi
    do_something_and_maybe_fail & # forking here
    pids[$!]="${fname}"
    echo "${#pids[@]} running" 1>&2
done

for pid in "${!pids[@]}"; do
    wait -n "$((pid))" || echo "${pids[pid]} failed with ${?}" 1>&2
done
Strictly avoid external processes (such as awk, grep and cut) when running a one-liner on each individual line. fork()ing is extremely inefficient in comparison to:
Running one single awk / grep / cut process on an entire input file (to preprocess all lines at once for easier processing in bash) and feeding the whole output into (e.g.) a bash loop.
Using Bash expansions instead, where feasible, e.g. "${line/,/.}" and other tricks from the EXPANSION section of the man bash page, without fork()ing any further processes.
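For example, the per-line awk and cut calls from the question can be replaced by a single read with a comma IFS — a sketch, with a made-up sample line:
line='host1,cpu,2021-08-18,42'
IFS=, read -r a b content <<< "$line"   # a=host1, b=cpu, content=2021-08-18,42
destfile="$a.$b.daily.csv"              # same result as the awk call, no fork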
Off-topic side notes:
ls -1 is unnecessary. First, ls won’t write multiple columns unless the output is a terminal, so a plain ls would do. Second, bash expansions are usually a cleaner and more efficient choice. (You can use nullglob to correctly handle empty directories / “no match” cases.)
Looping over the output from cat is a (less common) useless use of cat case. Feed the file into a loop in bash instead and read it line by line. (This also gives you more line format flexibility.)
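Putting both notes together — a sketch using nullglob and a plain redirection instead of ls and cat:
shopt -s nullglob                  # empty directory -> the loop body never runs
for fname in "$DATA_FILE_SOURCE"/*; do
    while IFS= read -r line; do    # no ls, no cat: read the file line by line
        : # process "$line"
    done < "$fname"
done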

How to properly iterate through a list using sshpass with a single ssh-login

Situation: we're feeding a list of filenames to sshpass, and it iterates correctly through a remote folder, checking whether files with the given names actually exist, then builds an updated list containing only the files that do exist, which is reused later in the bash script.
Problem: the list sometimes comprises tens of thousands of files, which means tens of thousands of SSH logins; this is harming performance and sometimes gets us blocked by our own security policies.
Intended solution: instead of starting the for loop and calling sshpass each time, do it the other way around and pass the loop to a single sshpass call.
This is how far I've got in passing the list to the sshpass instruction, in the example test below:
#!/bin/bash
all_paths=(`/bin/cat /home/user/filenames_to_be_tested.list`)
existing_paths=()
sshpass -p PASSWORD ssh -n USER@HOST bash -c "'
for (( i=0; i<${#all_paths[@]}; i++ ))
do
    echo ${all_paths[i]}
    echo \"-->\"$i
    if [[ -f ${all_paths[i]} ]]
    then
        echo ${all_paths[i]}
        existing_paths=(${all_paths[i]})
    fi
done
'"
printf '%s\n' "${existing_paths[@]}"
The issue here is that it appears to loop (you see a series of echoed lines), but it is not really iterating i: it always checks/prints the same line.
Can someone help spot the bug? Thanks!
The problem is that bash first parses the string and substitutes the variables; that happens before anything is sent to the server. If you want to stop bash from doing that, you have to escape every variable that should be expanded on the server.
#! /bin/bash
all_paths=(rootfs.tar derp a)
read -sp "pass? " PASS
echo
sshpass -p $PASS ssh -n $USER@$SERVER "
files=(${all_paths[@]})
existing_paths=()
for ((i=0; i<\${#files[@]}; i++)); do
    echo -n \"\${files[@]} --> \$i\"
    if [[ -f \${files[\$i]} ]]; then
        echo \${files[\$i]}
        existing_paths+=(\${files[\$i]})
    else
        echo 'Not found'
    fi
done
printf '%s\n' \"\${existing_paths[@]}\"
"
This becomes hard to read very fast. However, there's an option I personally like to use: create functions and send them to the server to be executed there, which avoids escaping a lot of stuff.
#! /bin/bash
all_paths=(rootfs.tar derp a)

function files_exist {
    local files=($@)
    local found=()
    for file in ${files[@]}; do
        echo -n "$file --> "
        if [[ -f $file ]]; then
            echo "exist"
            found+=("$file")
        else
            echo "missing"
        fi
    done
    printf '%s\n' "${found[@]}"
}

read -sp "pass? " PASS
echo
sshpass -p $PASS ssh -n $USER@$SERVER "
$(typeset -f files_exist)
files_exist ${all_paths[@]}
"

Use SED to comment out cronjobs (not that simple)

I have a rather large bash function that I'm working on. This function is a cronjob generator. The script is intended to be run with sudo privileges, and it allows the user to inspect an unprivileged user's crontab file. They can create new cronjobs (a few questions and it produces the proper syntax for them), and they can also remove a cronjob. That's where I've hit a wall.
In this part of my CASE statement, the user has been asked if they want to create a new cronjob -- they reply with "N" or "n" and we get here:
#!/bin/bash
read -r -p $'Would you like to create a new cronjob? [y/n]\n\n--> ' CRON
case "$CRON" in
    y|Y)
        echo "not pertinent to this discussion"
        ;;
    n|N)
        read -r -p $'\n\nWould you like to REMOVE a crontab entry? [y/n]: ' REMOVE
        case "$REMOVE" in
            Y|y)
                declare -a CRONTAB
                while IFS= read -r LINE
                do
                    CRONTAB+=("$LINE")
                done < <(grep -v '#' /var/spool/cron/"$SCRIPTUSER")
                echo -en "\nPlease select a cronjob to remove from the Crontab file:\n\n"
                PS3=$'\n\nPlease enter your selection: '
                select LINE in "${CRONTAB[@]}"
                do
                    echo "Going to remove \"$LINE\""
                    read -r -p $'Is this correct? [y/n]' CHOICE
                    case "$CHOICE" in
                        Y|y)
                            sed "s/$LINE/^#&/g" -i /var/spool/cron/"$SCRIPTUSER"
                            break
                            ;;
                        N|n)
                            break
                            ;;
                    esac
                done
                echo -en "\n\nCurrent Crontab entries for $SCRIPTUSER:\n\n"
                echo -en "\n\n######################################\n\n$(grep -v '#' /var/spool/cron/"$SCRIPTUSER")\n\n######################################\n\n"
                sleep 3
                break
                ;;
            N|n)
                break
                ;;
        esac
        ;;
esac
The problem I'm having: the sed statement doesn't seem to be doing anything at all to the cronjob entries I'm testing with. I would imagine the '*' and '/' characters are messing with the sed pattern, and I have already tried a version where I escaped all the '/', but sed still passed over the line as if nothing was there.
I appreciate the extra set of eyeballs, thank you!
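Your suspicion about '*' and '/' is most likely right: a typical crontab line is full of regex metacharacters and of the '/' delimiter. One approach (a sketch, not tested against your generator) is to backslash-escape everything sed treats specially before using $LINE as a pattern. Note also that in s/$LINE/^#&/g the ^ is literal in the replacement, so even a successful match would insert a caret rather than anchor the pattern:
# Backslash-escape sed's BRE metacharacters and the / delimiter in $LINE:
ESCAPED=$(printf '%s' "$LINE" | sed 's/[][\.*^$/]/\\&/g')
# Anchor the whole line and prepend '#' to comment it out:
sed -i "s/^${ESCAPED}\$/#&/" /var/spool/cron/"$SCRIPTUSER"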

Bash script support piping data into it

I have a bash script that I want to expand to support piping JSON into it.
Example:
echo '{}' | myscript store
So, I tried the following:
local value="$1"
if [[ -z "$value" ]]; then
    while read -r piped; do
        value=$piped
    done
fi
This works in the simple case above, but doing:
cat input.json | myscript store
only gets the last line of the file input.json; it does not handle every line.
How can I support all cases of piping?
The following works:
if [[ -z "$value" && ! -t 0 ]]; then
    while read -r piped; do
        value+=$piped
    done
fi
The trick was using += and also checking ! -t 0, which tests whether stdin is a terminal, i.e. whether data is being piped in.
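Note that += as written concatenates the lines with nothing in between. If the line breaks in the piped JSON matter, one variant (an assumption about the desired behaviour) is:
if [[ -z "$value" && ! -t 0 ]]; then
    while IFS= read -r piped; do
        value+=$piped$'\n'   # keep each line's newline (assumes newlines matter)
    done
fi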
If you want to behave like cat, why not use it?
#! /bin/bash
value="$( cat "$@" )"

how to create the option for printing out statements vs executing them in a shell script

I'm looking for a way to create a switch for this bash script so that I have the option of either printing the command to stdout (echo) or executing it, for debugging purposes. As you can see below, I am currently doing this manually by commenting out one statement or the other.
Code:
#!/usr/local/bin/bash
if [ $# != 2 ]; then
    echo "Usage: testcurl.sh <localfile> <projectname>" >&2
    echo "sample:testcurl.sh /share1/data/20110818.dat projectZ" >&2
    exit 1
fi
echo /usr/bin/curl -c $PROXY --certkey $CERT --header "Test:'${AUTH}'" -T $localfile $fsProxyURL
#/usr/bin/curl -c $PROXY --certkey $CERT --header "Test:'${AUTH}'" -T $localfile $fsProxyURL
I'm simply looking for an elegant/better way to create such a switch from the command line: print or execute.
One possible trick, though it will only work for simple commands (e.g., no pipes or redirection (a)), is to use a prefix variable like:
pax> cat qq.sh
${PAXPREFIX} ls /tmp
${PAXPREFIX} printf "%05d\n" 72
${PAXPREFIX} echo 3
What this will do is insert your specific variable (PAXPREFIX in this case) before the commands. If the variable is empty, it will not affect the command, as follows:
pax> ./qq.sh
my_porn.gz copy_of_the_internet.gz
00072
3
However, if it's set to echo, each command will be prefixed with echo and therefore printed rather than run:
pax> PAXPREFIX=echo ./qq.sh
ls /tmp
printf %05d\n 72
echo 3
(a) The reason why it will only work for simple commands can be seen if you have something like:
${PAXPREFIX} ls -1 | tr '[a-z]' '[A-Z]'
When PAXPREFIX is empty, it will simply give you the list of your filenames in uppercase. When it's set to echo, it will result in:
echo ls -1 | tr '[a-z]' '[A-Z]'
giving:
LS -1
(not quite what you'd expect).
In fact, you can see a problem with even the simple case above, where %05d\n is no longer surrounded by quotes.
If you want a more robust solution, I'd opt for:
if [[ ${PAXDEBUG:-0} -eq 1 ]] ; then
echo /usr/bin/curl -c $PROXY --certkey $CERT --header ...
else
/usr/bin/curl -c $PROXY --certkey $CERT --header ...
fi
and use PAXDEBUG=1 myscript.sh to run it in debug mode. This is similar to what you have now but with the advantage that you don't need to edit the file to switch between normal and debug modes.
For debugging output from the shell itself, you can run it with bash -x or put set -x in your script to turn it on at a specific point (and, of course, turn it off with set +x).
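For example, to trace just the interesting part of your script (the curl line is taken from your question):
set -x    # bash echoes each command, with expansions applied, from here on
/usr/bin/curl -c $PROXY --certkey $CERT --header "Test:'${AUTH}'" -T $localfile $fsProxyURL
set +x    # tracing off again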
#!/usr/local/bin/bash
if [[ "$1" == "--dryrun" ]]; then
    echoquoted() {
        printf "%q " "$@"
        echo
    }
    maybeecho=echoquoted
    shift
else
    maybeecho=""
fi
if [ $# != 2 ]; then
    echo "Usage: testcurl.sh <localfile> <projectname>" >&2
    echo "sample:testcurl.sh /share1/data/20110818.dat projectZ" >&2
    exit 1
fi
$maybeecho /usr/bin/curl "$1" -o "$2"
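Usage, reusing the sample arguments from the usage message (the --dryrun branch prints the shell-quoted command instead of running it):
./testcurl.sh --dryrun /share1/data/20110818.dat projectZ   # print only
./testcurl.sh /share1/data/20110818.dat projectZ            # actually run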
Try something like this:
show=echo
$show /usr/bin/curl ...
Then set/unset $show accordingly.
This does not directly answer your specific question, but I guess you're trying to see what command gets executed, for debugging. If you replace #!/usr/local/bin/bash with #!/usr/local/bin/bash -x, bash will run the script and echo the commands.
I do not know of a way for "print vs execute" but I know of a way for "print and execute", and it is using "bash -x". See this link for example.
