How do I fix my error saying the home directory is invalid? - bash

I'm still learning how to work with bash. My part 1 script is posted below, but now we're being asked to use the file cs.rosters.txt to generate the new_users.txt file.
The cs.rosters.txt looks like this:
CMPSC 1513 03|doejan|Doe, Jane|0510|0350
CMPSC 1513 03|smijoh|Smith, John|0510
CMPSC 1133 01|cp2stu3| CPII, Student3 Neither|2222|0020
Below is what I created for part 1 which runs correctly:
#!/bin/bash
while read -r line || [[ -n "$line" ]]; do
    username=$(echo "$line" | cut -d: -f1)
    GECOS=$(echo "$line" | cut -d: -f5)
    homedir=$(echo "$line" | cut -d: -f6)
    echo "adduser -g '$GECOS' -d '$homedir' -s /bin/bash '$username'"
done < "new_users.txt"
Here's where I'm struggling: the code above pretty much just displays the information in the new_users.txt file. For the second part, I want a script that processes the cs.rosters.txt file the same way and actually creates the accounts. My code is posted below:
#!/bin/bash
while read line; do
    user=$(echo $line | cut -d'|' -f2)
    pass=$(echo $line | cut -d'|' -f3)
    home=$(echo $line | cut -d'|' -f4)
    useradd -m -d $home -s /bin/bash $user
    echo $pass | passwd --stdin $user
done < cs_roster.txt
I'm getting two errors: one says the home directory "Computer" is invalid, and the other says --stdin is an invalid option. Can you help me?
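A few things stand out: the variables are unquoted, the script reads cs_roster.txt while the file is named cs.rosters.txt, the roster's field 3 is the full name (not a password) and field 4 is a section code (not a home directory), and passwd --stdin is a Red Hat extension that not every distribution ships. Below is a hedged sketch of a repaired loop, under the assumptions that field 2 is the username and that the home directory is derived as /home/$user (the roster has no path field); it prints the commands instead of running them, so it can be reviewed before running as root:

```shell
#!/bin/bash
# Sketch only. Assumptions: field 2 = username, field 3 = GECOS name,
# and the home directory is derived as /home/$user.
while IFS='|' read -r course user gecos rest; do
    printf "useradd -m -d '/home/%s' -s /bin/bash -c '%s' '%s'\n" \
        "$user" "$gecos" "$user"
    # For setting a password, chpasswd is more portable than passwd --stdin:
    # echo "$user:initialpassword" | chpasswd
done <<'EOF'
CMPSC 1513 03|doejan|Doe, Jane|0510|0350
CMPSC 1513 03|smijoh|Smith, John|0510
EOF
```

Once the printed commands look right, swap the printf for the real useradd call and read from cs.rosters.txt instead of the embedded sample lines.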

Related

Shell Script do while flow

I have a script whose content is like:
#!/bin/bash
DB_FILE='pgalldump.out'
echo $DB_FILE
DB_NAME=''
egrep -n "\\\\connect\ $DB_NAME" $DB_FILE | while read LINE
do
    DB_NAME=$(echo $LINE | awk '{print $2}')
    STARTING_LINE_NUMBER=$(echo $LINE | cut -d: -f1)
    STARTING_LINE_NUMBER=$(($STARTING_LINE_NUMBER+1))
    TOTAL_LINES=$(tail -n +$STARTING_LINE_NUMBER $DB_FILE | \
        egrep -n -m 1 "PostgreSQL\ database\ dump\ complete" | \
        head -n 1 | \
        cut -d: -f1)
    tail -n +$STARTING_LINE_NUMBER $DB_FILE | head -n +$TOTAL_LINES > /backup/$DB_NAME.sql
done
I know what it is doing, but I have a doubt about the flow of the loop in this case. On the line egrep -n "\\\\connect\ $DB_NAME" $DB_FILE | while read LINE, does egrep run first, or the while? DB_NAME is empty at the start of the code.
Could anyone please explain the flow in this case.
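Both sides of a pipeline are started together: egrep begins scanning the file while the loop blocks in read until the first line arrives. Because DB_NAME is empty when egrep starts, the pattern is effectively \connect followed by a space, so it matches every \connect line; reassigning DB_NAME inside the loop never changes what egrep has already matched. One side effect worth knowing: the while runs in a subshell, so variables set inside it are lost when the loop ends. A small demo (not the poster's script):

```shell
#!/bin/bash
# Both halves of the pipe start concurrently; the loop just waits on read.
# The loop body runs in a subshell, so COUNT set there is lost afterwards.
COUNT=0
printf 'a\nb\nc\n' | while read -r line; do
    COUNT=$((COUNT + 1))
    echo "saw $line (count=$COUNT)"
done
echo "after the loop: COUNT=$COUNT"   # prints 0: the subshell's copy is gone
```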

Logical Approach on bash

#!/bin/bash
echo -n "Enter the domain name > "
read name
dig -t ns "$name" | cut -d ";" -f3 | cut -d ":" -f2| grep ns | cut -f6 > registerfile.txt;
cat registerfile.txt | while read line; do dig axfr "#$line" "$name"; done | cut -d"." -f-4 > nmap.txt
It works up to this point. Below, the line and name parameters don't seem to match. How should this be changed?
cat nmap.txt | while read line; do if [ "$line" == "$name" ]; then host "$line"; fi; done > ping.txt
cat ping.txt | cut -d " " -f4 | while read line; do if [[ "$line" =~ ^[0-9]+$ ]]; then nmap -sS "$line";fi ;done
It's not clear where exactly things are going wrong, but here is a refactoring which might hopefully at least nudge you in the right direction.
#!/bin/bash
read -p "Enter the domain name > " name
dig +short -t ns "$name" |
tee registerfile.txt |
while read line; do
    dig axfr "#$line" "$name"
done |
cut -d"." -f-4 |
tee nmap.txt |
while read line; do
    if [ "$line" = "$name" ]; then
        host "$line"
    fi
done > ping.txt
cut -d " " -f4 ping.txt |
grep -E '^[0-9]+$' |
xargs -r -n 1 nmap -sS
Your remark in comments that if [ "$line" = "$name" ]; then host "$line"; fi isn't working suggests that the logic there is somehow wrong. It currently checks whether each line is identical to the original domain name, and then looks it up over and over again in those cases, which seems like a curious thing to do; but given only the code and the "does not work", it's hard to say what it's really supposed to accomplish. If you actually want something else, you need to be more specific about what you require. Perhaps you are actually looking for something like
... tee nmap.txt |
# Extract the lines which contain $name at the end
grep "\.$name\$" |
xargs -n 1 dig +short |
tee ping.txt |
grep -E '^[0-9]+$' ...
The use of multiple statically-named files is an antipattern; obviously, if these files serve no external purpose, just take out the tee commands and run the entire pipeline with no in-between output files. If you do need these files, having them overwritten on each run seems problematic -- maybe add a unique date stamp suffix to the file names?
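The date-stamp idea can be sketched like this (file names are illustrative; plug them into the tee calls above):

```shell
#!/bin/sh
# Sketch: give each run its own output files so reruns don't clobber
# earlier results.
ts=$(date +%Y%m%d-%H%M%S)
register="registerfile.$ts.txt"
nmapfile="nmap.$ts.txt"
echo "$register"
```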

shell script - select CSV lines by float value

I'm stuck with a strange behavior while reading a CSV file and selecting its lines with a specific column float value.
Here's an extract from the input file.
ben#truc:$ head summary.fasta.csv
scf7180000753635;170043549;XP_001849446.1;27.72;184;2e-13;74.7
scf7180000753636;340728919;XP_003402759.1;25.78;322;8e-19;93.6
scf7180000753642;328716306;XP_003245892.1;33.51;191;7e-27;119
scf7180000753642;512919417;XP_004929373.1;43.18;132;1e-23;108
scf7180000753642;512914080;XP_004928052.1;40.16;127;5e-21;94.7
scf7180000753664;328696819;XP_003240139.1;37.99;179;2e-23;107
scf7180000753664;328696819;XP_003240139.1;26.67;30;2e-23;25.4
scf7180000753664;328703138;XP_003242103.1;31.65;218;1e-20;99.4
scf7180000753669;383855900;XP_003703448.1;68.92;74;2e-23;102
scf7180000753669;380030611;XP_003698937.1;72.06;68;3e-22;99.8
Here's my shell script code:
#!/bin/sh
echo "extracting the values"
# prepare output files
echo "" > "40_sequence_identity.csv"
echo "" > "60_sequence_identity.csv"
echo "" > "80_sequence_identity.csv"
while read -r line; do
    # debug: check if line is correctly read
    echo $line
    # assign each CSV column value to a variable
    query=`echo $line | cut -d ';' -f1`
    gi=`echo $line | cut -d ';' -f2`
    refseq=`echo $line | cut -d ';' -f3`
    seq_identity=`echo $line | cut -d ';' -f4`
    align_length=`echo $line | cut -d ';' -f5`
    evalue=`echo $line | cut -d ';' -f6`
    score=`echo $line | -d ';' -f7`
    # debug: check if the cut command is OK
    echo "seqidentity:"$seq_identity
    # test the float value of column 4: if it exceeds a threshold, write the line to a specific file
    if [ $( echo "$seq_identity >= 40" | bc ) ]; then
        echo "$line" >> "40_sequence_identity.csv"
    fi
    if [ $( echo "$seq_identity >= 60" | bc ) ]; then
        echo "$line" >> "60_sequence_identity.csv"
    fi
    if [ $( echo "$seq_identity >= 80" | bc ) ]; then
        echo "$line" >> "80_sequence_identity.csv"
    fi
done < "summary.fasta.csv"
echo "DONE!"
echo "DONE!"
And here's the strange outputs.
extracting the values
scf7180000753635;170043549;XP_001849446.1;27.72;184;2e-13;74.7
./create_project_directories.sh: 1: ./create_project_directories.sh: -d: not found
seqidentity:27.72
scf7180000753636;340728919;XP_003402759.1;25.78;322;8e-19;93.6
./create_project_directories.sh: 1: ./create_project_directories.sh: -d: not found
seqidentity:25.78
scf7180000753642;328716306;XP_003245892.1;33.51;191;7e-27;119
./create_project_directories.sh: 1: ./create_project_directories.sh: -d: not found
seqidentity:33.51
scf7180000753642;512919417;XP_004929373.1;43.18;132;1e-23;108
./create_project_directories.sh: 1: ./create_project_directories.sh: -d: not found
seqidentity:43.18
scf7180000753642;512914080;XP_004928052.1;40.16;127;5e-21;94.7
./create_project_directories.sh: 1: ./create_project_directories.sh: -d: not found
seqidentity:40.16
scf7180000753664;328696819;XP_003240139.1;37.99;179;2e-23;107
./create_project_directories.sh: 1: ./create_project_directories.sh: -d: not found
seqidentity:37.99
scf7180000753664;328696819;XP_003240139.1;26.67;30;2e-23;25.4
./create_project_directories.sh: 1: ./create_project_directories.sh: -d: not found
seqidentity:26.67
First, the three output files (40_sequence_identity.csv, ...) contain all the lines, as if the tests didn't work.
Second, the file parsing seems OK, but the strange message -d: not found comes from nowhere. It appears before the echo displaying the value of $seq_identity, so it is probably related to the cut command.
Any idea why I get this output? When I execute the commands manually in the console, they work, but not when running the whole script.
Thanks for your help.
You are getting the error -d: not found because the command that assigns score is incomplete (the cut is missing):
score=`echo $line | -d ';' -f7`
So it should be:
score=$(echo $line | cut -d ';' -f7)

Shell Script: Read line in file

I have a file paths.txt:
/my/path/Origin/.:your/path/Destiny/.
/my/path/Origin2/.:your/path/Destiny2/.
/...
/...
I need a script, CopyPaste.sh, that uses the file paths.txt to copy all the files in each OriginX to the corresponding DestinyX.
Something like this:
#!/bin/sh
while read line
do
    var= $line | cut --d=":" -f1
    car= $line | cut --d=":" -f2
    cp -r var car
done < "paths.txt"
Use tr to translate the colon into a space, and apply the cp command in the same go:
#!/bin/sh
while read line; do
    cp `echo $line | tr ':' ' '`
done < "paths.txt"
You need to use command substitution to get a command's output into a shell variable (and note that cut's delimiter option is -d, not --d):
#!/bin/sh
while read line
do
    var=`echo $line | cut -d ':' -f1`
    car=`echo $line | cut -d ':' -f2`
    cp -r "$var" "$car"
done < "paths.txt"
Though your script can be simplified by letting read itself split each line on the colon:
while IFS=: read -r var car; do
    cp -r "$var" "$car"
done < "paths.txt"

hash each line in text file

I'm trying to write a little script which will open a text file and give me an md5 hash for each line of text. For example I have a file with:
123
213
312
I want output to be:
ba1f2511fc30423bdbb183fe33f3dd0f
6f36dfd82a1b64f668d9957ad81199ff
390d29f732f024a4ebd58645781dfa5a
I'm trying to do this part in bash which will read each line:
#!/bin/bash
#read.file.line.by.line.sh
while read line
do
    echo $line
done
Later on I do:
$ more 123.txt | ./read.line.by.line.sh | md5sum | cut -d ' ' -f 1
but I'm missing something here; it does not work.
Maybe there is an easier way...
Almost there, try this:
while read -r line; do printf %s "$line" | md5sum | cut -f1 -d' '; done < 123.txt
Unless you also want to hash the newline character at the end of every line, you should use printf or echo -n instead of plain echo.
In a script:
#! /bin/bash
cat "$@" | while read -r line; do
    printf %s "$line" | md5sum | cut -f1 -d' '
done
The script can be called with multiple files as parameters.
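The echo-versus-printf difference is easy to demonstrate. Note that the second hash below is exactly the first one in the question's expected output, so the sample hashes there were evidently computed with the trailing newline included:

```shell
#!/bin/sh
# echo appends a newline, so the two digests differ.
printf %s 123 | md5sum | cut -d' ' -f1   # 202cb962ac59075b964b07152d234b70
echo 123 | md5sum | cut -d' ' -f1        # ba1f2511fc30423bdbb183fe33f3dd0f
```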
You can just call md5sum directly in the script:
#!/bin/bash
#read.file.line.by.line.sh
while read line
do
    echo $line | md5sum | awk '{print $1}'
done
That way the script spits out directly what you want: the md5 hash of each line.
This worked for me:
cat $file | while read line; do printf %s "$line" | tr -d '\r\n' | md5 >> hashes.csv; done
