#!/bin/bash
commonguess() {
    while IFS= read -r guess; do
        # printf avoids hashing the trailing newline that echo would add
        try=$(printf '%s' "$guess" | sha256sum | awk '{print $1}')
        if [ "$try" = "$hashing" ]; then   # compare against the hash read by the outer loop
            echo "$name:$try"
            return 0
        fi
    done < passwordlist
    return 1
}
dict(){...}
brute(){...}
while IFS=':' read -r name hashing; do
    commonguess || dict || brute
done
My code has been fixed, and I need to do one more thing: when I run the brute function, it should stop after 2 minutes. I know the sleep command can make the script pause, but I have been told it is not a good idea to use kill, so I am wondering whether there is another way to do this.
You'd get a better answer if you were more specific about what brute actually consists of.
If it's pure shell itself, the cleanest way to have it manage its own execution time is to check the SECONDS variable frequently enough that you can abort the process yourself if it ever goes over some threshold.
If it's external, you're not going to be able to avoid kill, or some wrapper that handles the timeout on its own, such as timeout(1), which probably already exists on your system.
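A sketch of the SECONDS approach for a pure-shell brute(). The two-character lowercase search space is an assumption made up for the example; $name and $hashing are the variables the question's outer read loop supplies.

```shell
brute() {
    SECONDS=0        # bash builtin: counts seconds since it was last assigned
    local limit=120  # the question's 2-minute budget
    local c1 c2 try
    for c1 in {a..z}; do
        for c2 in {a..z}; do
            (( SECONDS >= limit )) && return 1   # time budget exhausted
            try=$(printf '%s' "$c1$c2" | sha256sum | awk '{print $1}')
            if [ "$try" = "$hashing" ]; then
                echo "$name:$try"
                return 0
            fi
        done
    done
    return 1   # search space exhausted before the deadline
}
```

The check costs nothing extra per iteration, since SECONDS is maintained by the shell itself, and there is no kill involved.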
This is the bash script.
Counter.sh:
#!/bin/bash
rm -f /home/pi/temp.mp3
cd /home/pi/ || exit
now=$(date +"%d-%b-%Y")
count="countshift1.sh"
mkdir "$now"
On row 5 of this script is the count variable. I just want to know how to use AWK to change the integer 1 (the 18th character; thanks for the response) into a 3, and then save the Counter.sh file.
This is basically http://mywiki.wooledge.org/BashFAQ/050 -- assuming your script actually does something with $count somewhere further down, you should probably refactor that to avoid this antipattern. See the linked FAQ for much more on this topic.
Having said that, it's not hard to do what you are asking here without making changes to live code. Consider something like
awk 'END { print 5 }' /dev/null > file
in a cron job or similar (using Awk just because your question asks for it, not because it's the best tool for this job), and then in your main script, use that file:
read index <file
count="countshift$index.sh"
While this superficially removes the requirement to change the script on the fly (which is a big win) you still have another pesky problem (code in a variable!), and you should probably find a better way to solve it.
I don't think awk is the ideal tool for that. There are many ways to do it.
I would use Perl.
perl -pi -e 's/countshift1/countshift3/' Counter.sh
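If Perl isn't available, GNU sed can do the same in-place edit with -i (a GNU extension; BSD sed wants `-i ''`). Shown here on a scratch copy so nothing real is touched; in practice you would run the sed line directly on Counter.sh:

```shell
# demo on a throwaway file standing in for Counter.sh
printf 'count="countshift1.sh"\n' > /tmp/counter-demo.sh
sed -i 's/countshift1/countshift3/' /tmp/counter-demo.sh
cat /tmp/counter-demo.sh
```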
#!/bin/bash
z=1
b=$(date)
while [[ $z -eq 1 ]]
do
    a=$(date)
    if [ "$a" == "$b" ]
    then
        b=$(date -d "+7 days")
        rsync -v -e ssh user@ip_address:~/sample.tgz /home/kartik2
        sleep 1d
    fi
done
I want to rsync a file every week! But if I start this script on every boot, the file will be rsynced every time the system starts. How do I alter the code to rsync on a weekly basis? (PS: I don't want to do this through a cron job; it's a school assignment.)
You are talking about having this run for weeks, right? So we have to take into account that the system will be rebooted, and the script needs to run unattended. In short, you need some means of ensuring the script runs at least once every week even when no one is around. The options look like this:
Option 1 (worst)
You set a reminder for yourself and you log in every week and run the script. While you may be reliable as a person, this doesn't allow you to go on vacation. Besides, it goes against our principle of "when no one is around".
Option 2 (okay)
You can background the process (./once-a-week.sh &), but this will not be reliable over time. Among other things, if the system restarts then your script will not be running and you won't know.
Option 3 (better)
For this to be reliable over weeks one option is to daemonize the script. For a more detailed discussion on the matter, see: Best way to make a shell script daemon?
You would need to make sure the daemon is started after reboot or system failure. For more discussion on that matter, see: Make daemon start up with Linux
Option 4 (best)
You said no cron, but it really is the best option. In particular, it would consume no system resources for the 6 days, 23 hours and 59 minutes when it does not need to be running. Additionally, it is naturally resilient to reboots and the like. So, I feel compelled to say that creating a crontab entry like the following would be my top vote: @weekly /full/path/to/script
If you do choose option 2 or 3 above, you will need to modify your script to track the week number (date +%V) in which it last successfully completed a run. The problem is that keeping that number only in memory means it will not survive a reboot.
To make any of the above more resilient, it might be best to create a directory where you can store a file to serve as a semaphore (e.g. week21.txt) or a file to store the state of the last run. Something like once-a-week.state to which you would write a value when run:
date +%V > once-a-week.state # write the week number to a file
Then to read the value, you would:
file="/path/to/once-a-week.state" # the file where the week number is stored
read name < "$file"
echo "$name"
You would then need to check to see if the week number matched this present week number and handle the needed action based on match or not.
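Putting the state-file pieces together, a minimal sketch of that check (the /tmp path is an assumption, and the commented-out rsync line stands in for the real weekly job):

```shell
state="/tmp/once-a-week.state"
this_week=$(date +%V)            # ISO week number, 01-53

last_week=""
[ -f "$state" ] && read last_week < "$state"

if [ "$this_week" != "$last_week" ]; then
    # a new week: do the weekly job, then record that it succeeded
    # rsync -v -e ssh user@ip_address:~/sample.tgz /home/kartik2
    echo "$this_week" > "$state"
fi
```

Run from a boot script, this does the job at most once per calendar week, no matter how often the machine restarts.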
#!/bin/bash
z=1
b=$(cat f1.txt)
while [[ $z -eq 1 ]]
do
    # store and compare epoch seconds; "%d-%m-%y" strings can't be compared with -ge
    a=$(date +%s)
    if [ "$b" == "" ] || [ "$a" -ge "$b" ]
    then
        b=$(date +%s -d "+7 days")
        echo "$b" > f1.txt
        rsync -v -e ssh HOST@ip:~/sample.tgz /home/user
        if [ $? -eq 0 ]
        then
            sleep 1d
        fi
    fi
done
This code seems to work well! Any changes to it, let me know.
#!/bin/bash
ls | sort -R | tail -$N | while read file; do
    mpg123 "$file"
    sleep 3
done
Any idea why it only plays 10 mp3s and exits?
There are hundreds of mp3s in the same directory as this file (playmusic.sh).
Thanks
As Marc B said, the problem occurs because the variable N is not set, which leads tail to fall back on its default number of lines, which is 10. (Obviously the same thing happens if N is actually set to 10.)
The fundamental problem here is that you didn't understand what this code actually does; I suspect you didn't write it yourself. Even though it's a bash script, it expects a variable N to be set, which is highly unorthodox for a bash script. You would normally use
$1
instead of $N, or better still
${1:?}
which displays an error and exits immediately if you forgot to pass in a command-line argument.
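For concreteness, here is the script rewritten to take the count as a command-line argument via ${1:?}, wrapped in a function so the idiom is easy to exercise. mpg123 and the mp3 directory are as in the question; play_n_tracks is just a name made up for this sketch.

```shell
play_n_tracks() {
    # errors out (with the usage message) if no count was passed
    local n=${1:?usage: play_n_tracks number-of-tracks}
    ls | sort -R | tail -n "$n" | while read -r file; do
        mpg123 "$file"
        sleep 3
    done
}
```

Called as `play_n_tracks 25` it plays 25 random tracks; called with no argument it stops immediately instead of silently playing 10.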
I am trying to solve an optimization problem and to find the most efficient way of performing the following commands:
whois -> sed -> while (exit while) ->perform action
while loop currently look like
while [ "$x" -eq "$smth" ]; do
    x=$((x+1))
done
some action
Maybe it is more efficient to have `while true` with an `if` inside (with the same condition as the while). Also, what is the best way in bash to measure the time required for each step?
The by far biggest performance penalty and most common performance problem in Bash is unnecessary forking.
while [[ something ]]
do
var+=$(echo "$expression" | awk '{print $1}')
done
will be thousands of times slower than
while [[ something ]]
do
var+=${expression%% *}
done
Since the former will cause two forks per iteration, while the latter causes none.
Things that cause forks include but are not limited to pipe | lines, $(command expansion), <(process substitution), (explicit subshells), and using any command not listed in help (which type somecmd will identify as 'builtin' or 'shell keyword').
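To see the difference concretely, and to answer the timing part of the question, bash's built-in time keyword works on any command or function. Both loops below compute the same first-word extraction; the sample string and the iteration count are made up for the benchmark:

```shell
expression="alpha beta gamma"

slow() {                       # two forks per iteration
    local var= i
    for ((i = 0; i < 200; i++)); do
        var+=$(echo "$expression" | awk '{print $1}')
    done
    echo "$var"
}

fast() {                       # no forks at all
    local var= i
    for ((i = 0; i < 200; i++)); do
        var+=${expression%% *}
    done
    echo "$var"
}

time slow > /dev/null
time fast > /dev/null
```

On a typical system the fork-free version is faster by several orders of magnitude, while both produce identical output.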
Well, for starters you could remove the $( ... ) command substitution; it creates a subshell and is sure to slow the task down somewhat:
while [ x -eq smth ]
do
(( x++ ))
done
I've coded the following script to add users from a text file. It works, but I'm getting an error that says "too many arguments". What is the problem?
#!/bin/bash
file=users.csv
while IFS="," read USRNM DOB SCH PRG PST ENROLSTAT; do
    if [ $ENROLSTAT == Complete ]; then
        useradd $USRNM -p $DOB
    else
        echo "User $USRNM is not fully enrolled"
    fi
done < $file
#cat users.csv | head -n 2 | tail -n 1
Use quotes. Liberally.
if [ "$ENROLSTAT" = Complete ]
(It's a single equal sign, too.) My greatest problem in shell programming has always been hidden spaces. It's one of the reasons I write so much in Perl, and why I tell everyone on my team to avoid the shell when running external programs. There is so much power in the shell, and so many little things that can trip you up, that I avoid it where possible. (And even where it's not.)
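With quoting applied throughout, the question's loop would look like the sketch below. Since useradd needs root, the demo only echoes what it would run, and a small sample file stands in for the real users.csv (same field order as the question); enroll_users is a name invented for the sketch.

```shell
# demo data standing in for users.csv
demo=/tmp/users-demo.csv
printf '%s\n' 'jdoe,010190,sch,prg,pst,Complete' \
              'asmith,020291,sch,prg,pst,Pending' > "$demo"

enroll_users() {
    local file=$1
    while IFS="," read -r USRNM DOB SCH PRG PST ENROLSTAT; do
        if [ "$ENROLSTAT" = "Complete" ]; then
            # the real script would run: useradd "$USRNM" -p "$DOB"
            echo "would add user: $USRNM"
        else
            echo "User $USRNM is not fully enrolled"
        fi
    done < "$file"
}

enroll_users "$demo"
```

With every expansion quoted, a field containing spaces (the usual source of "too many arguments" from test) can no longer split into multiple arguments.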