Having problems with the output of a basic script - bash

I copied this code for a THM exercise. I understand it and it does its job, which is passing names from a wordlist ($2) to steghide to try to crack the image ($1), and it works. The problem is that it doesn't show the correct password properly: it stops on the word before it, and if you press Enter it keeps going. I would like it to just stop when it finds the password and show it to me. Here's the code:
for word in $(cat $2); do
    steghide extract -sf $1 -p $word &> /dev/null
    if [ $? == 0 ]; then
        echo
        echo "[!] PWD FOUND - $word [!]"
        break
    else
        echo "NOPE - $word"
    fi
done

Steghide was asking me whether I wanted to overwrite the output file, since I had already run this process once. So that was the only problem: my script wasn't expecting another input request from steghide.
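One way to avoid that extra prompt (a sketch of the same loop; it assumes your steghide build supports the -f/--force flag for overwriting existing files) is to force the overwrite and close steghide's stdin so it can never stop to wait for an answer:
# -f tells steghide to overwrite an existing output file,
# and < /dev/null makes sure it cannot block waiting for keyboard input.
for word in $(cat "$2"); do
    if steghide extract -sf "$1" -p "$word" -f &> /dev/null < /dev/null; then
        echo
        echo "[!] PWD FOUND - $word [!]"
        break
    else
        echo "NOPE - $word"
    fi
done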


Take a file name as input and check if it exists

How do I create a Bash script that takes a file name as input? Then, if that file exists, it should print "File exists"; if not, print "File does not exist".
For example, if I ran ./do-i-exist.sh ./do-i-exist.sh, the output should be only 'File exists'
file="$1"
read answer
if [ $file != -$2 ]
then
echo "File exists"
else
echo "File does not exist"
fi
This is what I'm working with, but it is not working for me: whenever I add an extension like .sh, .txt or something similar, it won't find the file.
The test for whether a file exists can be done like this:
if [ -f "$file" ]
then
This tests for a regular file, not for other kinds of files like a directory.
This is how you can do it. Pass the path of the file when calling the script, like ./do-i-exist.sh file_path.
if [ -f "$1" ]
then
    echo "File Exists"
else
    echo "File does not exist"
fi
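For example, assuming the script above is saved as do-i-exist.sh and made executable, a quick run on a typical Linux box might look like this:
$ ./do-i-exist.sh /etc/passwd
File Exists
$ ./do-i-exist.sh /no/such/file.txt
File does not exist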
First of all, I want to thank anyone and everyone who tried to help. After three hard days of work, I found the answer; here it is:
#!/bin/bash
file="$@"
if [ -f "$file" ]
then
    echo "File exists"
else
    echo "File does not exist"
fi
Using this table:
Variable Name    Description
$0               The name of the Bash script
$1 - $9          The first 9 arguments to the Bash script
$#               Number of arguments passed to the Bash script
$@               All arguments passed to the Bash script
$?               The exit status of the most recently run process
$$               The process ID of the current script
$USER            The username of the user running the script
$HOSTNAME        The hostname of the machine
$RANDOM          A random number
$LINENO          The current line number in the script
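As a quick illustration of the table (a small sketch; the name show-args.sh is made up, and you can run it with any arguments you like):
#!/bin/bash
# show-args.sh - print a few of the special variables from the table above
echo "Script name (\$0): $0"
echo "First argument (\$1): $1"
echo "Argument count (\$#): $#"
echo "All arguments (\$@): $@"
echo "Process ID (\$\$): $$"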
I and other users were focused on using $1, which as far as I understand refers to the first argument passed to the script, but for some reason it wasn't working, since the script needed to accept more inputs.
As I said in my previous comments, I didn't have control over the input. The input was hidden in a locked file, and I needed to feed my script to it.
From what we know, $0 is only used to get the script's file name, $1 gets the first argument, and $@ will just take anything (I guess).
I know absolutely nothing about bash and this was my first time ever using it, which is why it took me three days to solve this puzzle. This was part of a CTF, and just like me, many others may struggle in the future to understand how to write a script that adapts to a series of inputs from a second script.
This is how it was supposed to work:
I was given access to a very restricted server, and on this server I was given the encrypted-file.sh file. This file was supposed to be fed to /path/to/myfile.sh; then encrypted-file.sh would execute a second command to open a third, locked file hiding a flag in it.
This only works when the right bash file uses the right variables, so that encrypted-file.sh runs without errors, which is what I accomplished here.
I used a while loop because it made sense in my case: I really needed a file for the script to work.
restore_file="$1"
while [ ! -f "$restore_file" ]
do
    echo "File not found: $restore_file"
    echo "Please provide a valid file:"
    read restore_file
done
As written above, $1 is the first argument given to the script. In this case, if no argument is given or the argument is not a file, it will prompt again.
By the way, use -d instead of -f to check for a directory.
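For example, the same prompt loop for a directory just swaps the test (a sketch; restore_dir is only an illustrative name):
restore_dir="$1"
while [ ! -d "$restore_dir" ]
do
    echo "Directory not found: $restore_dir"
    echo "Please provide a valid directory:"
    read restore_dir
done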

Compare strings if contains in bash

I am trying to implement a bash script that reads from an error log file and compares strings against a list of exceptions.
I am trying to compare them with if:
error="[*] Text Text # level 4: 'Some text' [parent = 'Not found'] "
exception="'Not found'"
if [[ "${error}" == *"${exception}"* ]]; then
echo "Yes it contains!"
fi
In this case I would expect the script to print "Yes it contains!", but it doesn't work as I expected. It is also true that my logs contain special characters; does anyone know how I should handle that in the comparison?
My if also works for me on its own, but I might have something wrong in my nested loop. I am running the script as follows.
I have a file with errors called mylogfile.txt:
[*] Text Text # level 4: 'Some text' [parent = 'Not found']
Then I have another file, exception.txt, where the exceptions are listed:
'Not found'
I do a loop over both files to see if I find anything:
while IFS='' read -r line || [ -n "$line" ]; do
    exception="$line"
    while IFS='' read -r line || [ -n "$line" ]; do
        err="$line"
        if [[ "${err}" == *"${exception}"* ]]; then
            echo "Yes it contains!"
        fi
    done < "mylogfile.txt"
done < "exception.txt"
I don't see anything wrong with your script, and it works when I run it.
That said, looping over files line by line in a shell script is a code smell. Sometimes it's necessary, but often you can trick some command or another into doing the hard work for you. When searching through files, think grep. In this case, you can actually get rid of both loops with a single grep!
$ grep -f exception.txt mylogfile.txt
[*] Text Text # level 4: 'Some text' [parent = 'Not found']
To use it in an if statement, add -q to suppress its normal output and just check the exit code:
if grep -qf exception.txt mylogfile.txt; then
    echo "Yes, it's contained!"
fi
From the grep(1) man page:
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. The empty file contains zero patterns, and therefore matches nothing.
-q, --quiet, --silent
Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected.
Use grep if you want an exact match:
if grep -q "$exception" <<< "$error"; then
    echo "Yes it contains!"
fi
Use the -i switch to ignore case.
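Since the question mentions special characters, it may also be worth adding -F, which makes grep treat the exception as a literal string rather than a regular expression (a sketch reusing the $error and $exception variables from above):
# -F: match the exception as a fixed string, so [ ] * and friends are taken literally
# -i: ignore case
# -q: stay quiet and only set the exit status
if grep -qiF "$exception" <<< "$error"; then
    echo "Yes it contains!"
fi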

What is wrong with my bash script?

What I have to do is edit a script given to me that will check whether the user has write permission for a file named journal-file in the user's home directory. The script should take appropriate action if journal-file exists and the user does not have write permission to it.
Here is what I have written so far:
if [ -w $HOME/journal-file ]
then
    file=$HOME/journal-file
    date >> file
    echo -n "Enter name of person or group: "
    read name
    echo "$name" >> $file
    echo >> $file
    cat >> $file
    echo "--------------------------------" >> $file
    echo >> $file
    exit 1
else
    echo "You do not have write permission."
    exit 1
fi
When I run the script, it prompts me to input the name of the person/group, but after I press Enter nothing happens. It just sits there letting me keep typing and never continues past that part. Why is it doing this?
The statement:
cat >>$file
will read from standard input and write to the file. That means it will wait until you indicate end of file with something like CTRL-D. It's really no different from just typing cat at a command line and seeing that nothing happens until you enter something and it waits until you indicate end of file.
If you're trying to append another file to the output file, you need to specify its name, such as cat $HOME/myfile.txt >>$file.
If you're trying to get a blank line in there, use echo rather than cat, such as echo >>$file.
You also have a couple of other problems, the first being:
date >> file
since that will try to create a file called file (in your working directory). Use $file instead.
The second is the exit code of 1 in the case where what you're trying to do has succeeded. That may not be a problem now but someone using this at a later date may wonder why it seems to indicate failure always.
To be honest, I'm not really a big fan of the if ... then return else ... construct. I prefer fail-fast with less indentation and better grouping of output redirection, such as:
file=${HOME}/journal-file
if [[ ! -w ${file} ]] ; then
    echo "You do not have write permission."
    exit 1
fi
echo -n "Enter name of person or group: "
read name
(
    date
    echo "$name"
    echo
    echo "--------------------------------"
    echo
) >>${file}
I believe that's far more readable and maintainable.
It's this line
cat >> $file
cat is concatenating input from standard input (i.e. whatever you type) to $file.
I think the part
cat >> $file
copies everything from stdin to the file. If you hit Ctrl+D (end of file), the script can continue.
1) You had better check first whether the file exists or not:
[[ -e $HOME/journal-file ]] || \
    { echo "$HOME/journal-file does not exist"; exit 1; }
2) You have to replace cat >> $file with whatever you actually want to do with the file. This is the command that is blocking the execution of the script.
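For example, if the goal was just to let the user type a single journal entry, one read call would do the job without the apparent hang (just a sketch; entry is an illustrative variable name):
echo -n "Enter journal entry: "
read -r entry
echo "$entry" >> "$file"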

Parsing command output in bash to variables

I have a number of bash scripts, each doing its own thing merrily. Do note that while I program in other languages, I only use Bash to automate things, and am not very good at it.
I'm now trying to combine a number of them to create "meta" scripts, if you will, which use other scripts as steps. The problem is that I need to parse the output of each step to be able to pass a part of it as params to the next step.
An example:
stepA.sh
[...does stuff here...]
echo "Task complete successfuly"
echo "Files available at: $d1/$1"
echo "Logs available at: $d2/$1"
Both of the above are paths, such as /var/www/thisisatest and /var/log/thisisatest (note that the files path always starts with /var/www and the logs path always starts with /var/log). I'm only interested in the files path.
stepB.sh
[...does stuff here...]
echo "Creation of $d1 complete."
echo "Access with username $usr and password $pass"
All variables here are simple strings that may contain special characters (but no spaces).
What I'm trying to build is a script that runs stepA.sh, then stepB.sh and uses the output of each to do its own stuff. What I'm currently doing (both above scripts are symlinked to /usr/local/bin without the .sh part and made executable):
#!/bin/bash
stepA $1 | while read -r line; do
    # Create the container, and grab the file location
    # then pass it to the next pipe
    if [[ "$line" == *:* ]]
    then
        POS=`expr index "$line" "/"`
        PTH="/${line:$POS}"
        if [[ "$PTH" == *www* ]]
        then
            #OK, have what I need here, now what?
            echo $PTH;
        fi
    fi
done
# Somehow get $PTH here
stepB $1 | while read -r line; do
...
done
#somehow have the required strings here
I'm stuck on passing PTH to the next step. I understand this is because piping runs the loop in a subshell; however, all the examples I've seen refer to files rather than commands, and I could not make this work. I tried piping the echo to a "next step", such as
stepA | while ...
    echo $PTH
done | while ...
    #Got my var here, but cannot run stuff
done
How can I run stepA and have the PTH variable available for later?
Is there a "better way" to extract the path I need from the output than nested ifs ?
Thanks in advance!
Since you're using bash explicitly (in the shebang line), you can use its process substitution feature instead of a pipe:
while read -r line; do
    if [[ "$line" == *:* ]]
    .....
    fi
done < <(stepA $1)
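Because the loop now runs in the current shell instead of a pipeline subshell, anything assigned inside it is still set after done. A minimal sketch of the idea, assuming the stepA output lines look exactly as quoted in the question:
PTH=""
while read -r line; do
    # grab the path from the "Files available at: ..." line
    if [[ "$line" == "Files available at: "* ]]; then
        PTH="${line#Files available at: }"
    fi
done < <(stepA "$1")
echo "Files path: $PTH"   # still set here, so it can be passed on to the next step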
Alternatively, you could capture the command's output into a string variable, and then parse that:
output="$(stepA $1)"
tmp="${output#*$'\nFiles available at: '}" # output with everything before the filepath trimmed
filepath="${tmp%%$'\n'*}" # trim the first newline and everything after it from $tmp
tmp="${output#*$'\nLogs available at: '}"
logpath="${tmp%%$'\n'*}"
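The same trimming trick works on the stepB output too, e.g. to pull out the username (again a sketch that assumes the exact output format quoted in the question):
outB="$(stepB $1)"
creds="${outB#*$'\nAccess with username '}"   # drop everything up to and including "username "
usr="${creds%% *}"                            # the first remaining word is the username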

passing variables to a bash script

I have a bash script, test.sh, and one parameter I want to use is --no-email.
When I run test.sh --no-email, everything works as expected and I do not receive an email status report.
However, what I really want to run is test.sh test.cnf, where the --no-email parameter is stored in the test.cnf file along with a load of other parameters. I can't for the life of me get this to work. Perhaps I am being completely stupid and not understanding?
Many thanks
echo $* | grep -se '--no-email' &> /dev/null
SEND_MAIL=`echo $?`
echo -e "DEBUG: \$*=$*"
if [ ! "$SEND_MAIL" == "0" ]; then
    echo 'Mail would have been sent!'
else
    echo 'NO MAIL WOULD HAVE BEEN SENT!'
fi
If you cannot modify the script test.sh to support this you still can use this syntax to fetch parameters from a config file:
test.sh $(<test.cnf)
If this assumption is not true, i.e. you want to modify test.sh itself to support this then you have to be more specific about what happens inside test.sh.
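As a concrete, made-up example, if test.cnf contained the parameters one per line:
--no-email
--no-foo
then test.sh $(<test.cnf) word-splits the file's contents onto the command line, so it behaves as if you had typed ./test.sh --no-email --no-foo directly.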
Edit: the content of test.sh has now been added to the question. Starting from there, the simplest thing to do would be like this:
grep -sqe '--no-email' "$*"
SEND_MAIL=$?
But you wrote that you have a bunch of other parameters. Doing a grep for each one might be inconvenient. In that case you can loop over the words of a cnf file like this:
#!/bin/bash
while read line; do
    for word in $line; do
        echo "examining $word"
        case "$word" in
            --no-email)
                SEND_MAIL=0
                ;;
            --no-foo)
                NO_FOO=0
                ;;
            *)
                echo 1>&2 "WARNING: Unknown parameter: $word"
                ;;
        esac
    done
done < "$1"
