I'm in the process of streamlining how I extract Windows password hashes during security audits. I want to make it easier to generate a list of recovered users and their passwords when I do an audit, and I think it would also be useful for other people who need to compare and correlate large amounts of data.
So here's the gist:
When I extract all of the data from the Windows system files, I simplify them down to the format user:hash, where the hash is an NTLM hash such as "a87f3a357d73085c45f9416be5787e86."
I then use oclHashcat to attempt to crack the hashes; whether by dictionary or brute force doesn't matter. This produces an output of all of the recovered hashes, but Hashcat writes them in the format hash:password.
Now here's my problem and what I would like some input on - I want to produce the output as user:password, given the two input files. Considering that I can have hundreds of hashes yet only a few recovered passwords, there is no point in trying to order the lists.
I am unsure which data structure might benefit me the most. Arrays were too inefficient for large tables. I've looked into serialization and I've been exploring the use of Hash Maps and Hash Tables. Given the size of the hash, I haven't had any luck implementing either of these methods, or I'm doing so incorrectly.
Currently I'm running the program like so:
program [user:hash file] [hash:password file] -o [user:password output]
And, briefly, the logic I'm trying to implement looks like this:
Load Files
// user:hash file
For each line, split by ':' delimiter
before delimiter = table1.user
after delimiter = table1.hash
// hash:password file
For each line, split by ':' delimiter
before delimiter = table2.hash
after delimiter = table2.password
// generate user:password file
Check each entry of table1 vs table2
if table1.hash = table2.hash
table1.user = output.user
table2.password = output.password
print to output "output.user:output.password"
I am only trying to figure out an efficient method for walking through each line and extracting the necessary data into a data structure that I can easily search.
If I need to clarify anything, please let me know. Any help is appreciated!
I would use this for my data structure:
std::map<std::string,std::string> hash_user_map;
Now go through all the users and hashes from table 1:
For each user in table1
    hash_user_map[table1.hash] = table1.user;
Now go through all the cracked passwords with hashes from table2:
For each entry in table2
    std::string user = hash_user_map[table2.hash];
    std::cout << user << ":" << table2.password << "\n";
I decided to go with a shell script, and I used an associative array to match the required data that I needed.
Note: Most of this program deals with the way my hash files are formatted. Look at the comments in the script.
#!/bin/bash
# Signus
# User-Password Hash Comparison Tool v1.0
# Simple utility that allows for the comparison between a file with a 'user:hash' format to a separate file with a 'hash:password' format. The comparison matches the hashes and returns an output file in the format 'user:password'
# Example of 'user:hash' -> george:c21cfaebe1d69ac9e2e4e1e2dc383bac
# Example of 'hash:password' -> c21cfaebe1d69ac9e2e4e1e2dc383bac:password
# 'user:hash' obtained from creddump suite: http://code.google.com/p/creddump/
# Note: Used custom 'dshashes.py' file: http://ptscripts.googlecode.com/svn/trunk/dshashes.py
# 'hash:password' obtained from ocl-Hashcat output
usage="Usage: $0 [-i user:hash input] [-t hash:password input] [-o output]"
declare -a a1
declare -a a2
declare -A o
index1=0
index2=0
countA1=0
countA2=0
matchCount=0
if [ -z "$*" ]; then
echo $usage
exit 1
fi
if [ $# -ne 6 ]; then
echo "Error: Invalid number of arguments."
echo $usage
exit 1
fi
echo -e
echo "---Checking Files---"
while getopts ":i:t:o:" option; do
case $option in
i) inputFile1="$OPTARG"
if [ ! -f $inputFile1 ]; then
echo "Unable to find or open file: $inputFile1"
exit 1
fi
echo "Reading...$inputFile1"
;;
t) inputFile2="$OPTARG"
if [ ! -f $inputFile2 ]; then
echo "Unable to find or open file: $inputFile2"
exit 1
fi
echo "Reading...$inputFile2"
;;
o) outputFile="$OPTARG"
echo "Writing...$outputFile"
;;
[?]) echo $usage >&2
exit 1
;;
esac
done
shift $(($OPTIND-1))
#First read the files and cut each line into an array
echo -e
echo "---Reading Files---"
while read LINE
do
a1[$index1]="$LINE"
index1=$(($index1+1))
countA1=$(($countA1+1))
done < $inputFile1
echo "Read $countA1 lines in $inputFile1"
while read LINE
do
a2[$index2]="$LINE"
index2=$(($index2+1))
countA2=$(($countA2+1))
done < $inputFile2
echo "Read $countA2 lines in $inputFile2"
#Then cut each item out of the array and store it into separate variables
echo -e
echo "Searching for Matches..."
for (( j=0; j<${#a2[@]}; j++ ))
do
hash2=$(echo ${a2[$j]} | cut -d: -f1)
pword=$(echo ${a2[$j]} | cut -d: -f2)
for (( i=0; i<${#a1[@]}; i++ ))
do
us=$(echo ${a1[$i]} | cut -d: -f1)
hash1=$(echo ${a1[$i]} | cut -d: -f2)
if [ "$hash2" = "$hash1" ]; then
matchCount=$(($matchCount+1))
o["$us"]=$pword
echo -e "Match Found[$matchCount]: \t Username:$us \t Password:$pword"
break
fi
done
done
echo -e "Matches Found: $matchCount\n" >> $outputFile
for k in "${!o[@]}"
do
echo -e "Username: $k \t Password: ${o[$k]}" >> $outputFile
done
echo -e "\nWrote $matchCount lines to $outputFile"
unset o
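For reference, the nested loop can be avoided entirely by keying the associative array on the hash, which is what the C++ answer above does. A minimal sketch of that approach using the same file variables (not part of the script above, and it writes plain user:password lines):
declare -A user_by_hash
# index every user:hash line by its hash
while IFS=: read -r user hash; do
    user_by_hash[$hash]=$user
done < "$inputFile1"
# for each cracked hash:password line, look the user up directly
while IFS=: read -r hash pword; do
    if [ -n "${user_by_hash[$hash]}" ]; then
        echo "${user_by_hash[$hash]}:$pword" >> "$outputFile"
    fi
done < "$inputFile2"
This turns the hash matching into a single lookup per cracked password instead of a scan over every user.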
I'm writing a POSIX-compliant script in dash, so I am having to get creative with fake arrays.
Contents of fake_array.sh
fake_array_job() {
array="$1"
job_name="$2"
comma_count="$(echo "$array" | grep -o -F ',' | wc -l)"
if [ "$comma_count" -lt '1' ]; then
echo 'You gave a fake array to fake_array_job that does not contain at least one comma. Exiting...'
exit
fi
array_count="$(( comma_count + 1 ))"
position=1
while [ "$position" -le "$array_count" ]; do
item="$(echo "$array" | cut -d ',' -f "$position")"
"$job_name" || exit
position="$(( position + 1 ))"
done
}
Contents of script.sh
#!/bin/sh
. fake_array.sh
job_to_do() {
echo "$item"
}
fake_array_job 'goat,pig,sheep' 'job_to_do'
second_job() {
echo "$item"
}
fake_array_job 'apple,orange' 'second_job'
I am aware that it may seem silly to use a unique name for each job I pass to fake_array_job, but I like that I have to type it twice because it helps to reduce human error.
I keep reading that it is a bad idea to use a variable as a command. Does my use of "$job_name" to run a function have any negative implications as it concerns stability, security or efficiency?
(Read to the end for a good suggestion by Charles Duffy. I'm too lazy to completely rewrite my answer to mention it earlier...)
You can iterate over the "array" using simple parameter expansions without requiring multiple elements in the array.
fake_array_job() {
args=${1%,}, # Ensure the array ends with a comma
job_name=$2
while [ -n "$args" ]; do
item=${args%%,*}
"$job_name" || exit
args=${args#*,}
done
}
One problem with the above is that it assures the array is comma-terminated by assuming that foo,bar, is not a comma-delimited array with an empty last element. A better (though uglier) solution is to use read to break up the array.
fake_array_job () {
args=$1
job_name=$2
rest=$args
while [ -n "$rest" ]; do
IFS=, read -r item rest <<EOF
$rest
EOF
"$job_name" || exit
done
}
(You can use <<-EOF and make sure the here doc is indented with tabs, but it's hard to convey that here, so I'll just leave the ugly version.)
There's also Charles Duffy's good suggestion of using case to pattern match on the array to see if there are any commas left or not:
while [ -n "$args" ]; do
  case $args in
    *,*) next=${args%%,*}; args=${args#*,}; "$cmd" "$next";;
    *) "$cmd" "$args"; break;;
  esac
done
So I have a file called "nouns" that looks like this:
English word:matching Spanish word
English word:matching Spanish word
..etc etc
I need to make a program that lists all the English words, with an option to quit. The program displays the English words and asks the user which word he wants translated; he can also type "quit" to exit.
This is what I have so far that shows me the list in English:
select english in $(cut -d: -f1 nouns)
do
if [ "$english" = 'quit' ]
then
exit 0
fi
done
I know that I need to run a command that pulls up the second column (-f2) by searching for the corresponding English word like this
result=$(grep -w $english nouns|cut -d: -f2)
My end result should just output the corresponding Spanish word. I am just not sure how to get all the parts to fit together. I know it's based on some kind of "if" construct (I think), but do I start a separate if statement for the grep line?
Thanks
You need a loop in which you ask for input from the user. The rest is putting things together with the correct control flow. See my code below:
while :
do
read -p "Enter word (or quit): " input
if [ "$input" = "quit" ]; then
echo "exiting ..."
break
else
echo "searching..."
result=$(grep "$input" nouns | cut -d ':' -f 2)
if [[ $result ]]; then
echo "$result"
else
echo "not found"
fi
fi
done
dfile=./dict
# Load the dictionary file into an associative array: English word -> Spanish word
declare -A dict
while IFS=: read -r en es; do
    dict[$en]=$es
done < "$dfile"
PS3="Select word>"
# Offer every English word plus a "quit program" entry
select ans in "${!dict[@]}" "quit program"; do
    case "$REPLY" in
        [0-9]*) w=$ans;;    # a menu number was entered
        *) w=$REPLY;;       # the word itself was typed
    esac
    case "$w" in
        quit*) exit 0;;
        *) echo "${dict[$w]}" ;;
    esac
done
You want to run this in a constant while loop, only breaking the loop if the user enters "quit." Get the input from the user using read to put it in a variable. As for the searching, this can be done pretty easily with awk (which is designed to work with delimited files like this) or grep.
#!/bin/sh
while true; do
read -p "Enter english word: " word
if [ "$word" = "quit" ]; then
break
fi
# Take your pick, either of these will work:
# awk -F: -v "w=$word" '{if($1==w){print $2; exit}}' nouns
grep -Pom1 "(?<=^$word:).*" nouns
done
I'm trying to create a script to run a command and take that output and use it to create a menu dynamically. I also need to access parts of each output line for specific values.
I am using the command:
lsblk --nodeps -no name,serial,size | grep "sd"
output:
sda 600XXXXXXXXXXXXXXXXXXXXXXXXXX872 512G
sdb 600XXXXXXXXXXXXXXXXXXXXXXXXXXf34 127G
I need to create a menu that looks like:
Available Drives:
1) sda 600XXXXXXXXXXXXXXXXXXXXXXXXXX872 512G
2) sdb 600XXXXXXXXXXXXXXXXXXXXXXXXXXf34 127G
Please select a drive:
(note: there can be any number of drives, this menu would be constructed dynamically from the available drives array)
When the user selects the menu number I need to be able to access the drive id (sdb) and drive serial number (600XXXXXXXXXXXXXXXXXXXXXXXXXXf34) for the selected drive.
Any assistance would be greatly appreciated.
Please let me know if any clarification is needed.
#!/usr/bin/env bash
# Read command output line by line into array ${lines[@]}
# Bash 3.x: use the following instead:
# IFS=$'\n' read -d '' -ra lines < <(lsblk --nodeps -no name,serial,size | grep "sd")
readarray -t lines < <(lsblk --nodeps -no name,serial,size | grep "sd")
# Prompt the user to select one of the lines.
echo "Please select a drive:"
select choice in "${lines[#]}"; do
[[ -n $choice ]] || { echo "Invalid choice. Please try again." >&2; continue; }
break # valid choice was made; exit prompt.
done
# Split the chosen line into ID and serial number.
read -r id sn unused <<<"$choice"
echo "id: [$id]; s/n: [$sn]"
As for what you tried: using an unquoted command substitution ($(...)) inside an array constructor ( ... ) makes the tokens in the command's output subject to word splitting and globbing, which means that, by default, each whitespace-separated token becomes its own array element and may expand to matching filenames.
Filling arrays in this manner is fragile, and even though you can fix that by setting IFS and turning off globbing (set -f), the better approach is to use readarray -t (Bash v4+) or IFS=$'\n' read -d '' -ra (Bash v3.x) with a process substitution to fill an array with the (unmodified) lines output by a command.
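To make the difference concrete, here is a minimal side-by-side (the robust form is the same readarray call used above; the fragile form is shown only to illustrate the problem):
# Fragile: every whitespace-separated token becomes its own element, and tokens may glob.
lines=( $(lsblk --nodeps -no name,serial,size | grep "sd") )
# Robust (Bash 4+): exactly one element per output line, taken verbatim.
readarray -t lines < <(lsblk --nodeps -no name,serial,size | grep "sd")
printf '%s\n' "${lines[@]}"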
I managed to untangle the issue in an elegant way:
#!/bin/bash
# Dynamic Menu Function
createmenu () {
select selected_option; do # in "$#" is the default
if [ 1 -le "$REPLY" ] && [ "$REPLY" -le $(($#)) ]; then
break;
else
echo "Please make a vaild selection (1-$#)."
fi
done
}
declare -a drives=();
# Load Menu by Line of Returned Command
mapfile -t drives < <(lsblk --nodeps -o name,serial,size | grep "sd");
# Display Menu and Prompt for Input
echo "Available Drives (Please select one):";
createmenu "${drives[#]}"
# Split Selected Option into Array and Display
drive=($(echo "${selected_option}"));
echo "Drive Id: ${drive[0]}";
echo "Serial Number: ${drive[1]}";
How about something like the following
#!/bin/bash
# define an array
declare -a obj
# capture the current IFS
cIFS=$IFS
# change IFS so that word splitting happens only on newlines (one array element per line)
IFS=$'\n'
# assign & format output from lsblk
obj=( $(lsblk --nodeps -no name,serial,size) )
# generate a menu system
select item in "${obj[@]}"; do
if [ -z "${item}" ]; then
echo "Invalid selection"
continue
else
selection=${item}
break
fi
done
# reset the IFS
IFS=${cIFS}
That should be a bit more portable, with fewer dependencies such as readarray, which isn't available on some systems.
I'm using bash on cygwin.
I have to take a .csv file that is a subset of a much larger set of settings and shuffle the new csv settings (same keys, different values) into the 1000-plus-line original, making a new .json file.
I have put together a script to automate this. The first step in the process is to "clean up" the csv file by extracting lines that start with "mme " and "sms ". Everything else is to pass through cleanly to the "clean" .csv file.
This routine is as follows:
# clean up the settings, throwing out mme and sms entries
cat extract.csv | while read -r LINE; do
if [[ $LINE == "mme "* ]]
then
printf "$LINE\n" >> mme_settings.csv
elif [[ $LINE == "sms "* ]]
then
printf "$LINE\n" >> sms_settings.csv
else
printf "$LINE\n" >> extract_clean.csv
fi
done
My problem is that this thing stubs its toe on the following string at the end of one entry: 100%." When it's done with the line, it simply elides the %." and the new-line marker following it, and smears the two lines together:
... 100next.entry.keyname...
I would love to reach in and simply escape the % sign manually, but that's not a realistic option for my use case. Clearly I'm missing something. My suspicion is that I am in some wise abusing cat or read in the first line.
If there is some place I should have looked to find the answer before bugging you all, by all means point me in that direction and I'll sod off.
The syntax for printf is:
printf format [argument]...
In the printf format string, anything introduced by % is a format specifier, as described in the printf documentation. What you would like to do is:
while read -r line; do                  # replaced LINE with line; all-uppercase names are conventionally reserved for the shell and environment
    if [[ "$line" = "mme "* ]]          # here * globs anything that comes next
    then
        printf "%s\n" "$line" >> mme_settings.csv
    elif [[ "$line" = "sms "* ]]
    then
        printf "%s\n" "$line" >> sms_settings.csv
    else
        printf "%s\n" "$line" >> extract_clean.csv
    fi
done < extract.csv                      # avoided the useless use of cat
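To see why the original loop mangled that particular line, compare the two forms on a value that ends in %. (a minimal illustration, not part of the script above):
line='progress: 100%.'
printf "$line\n"       # the % is parsed as the start of a format directive, so the line is not printed as-is
printf '%s\n' "$line"  # the % is plain data; the line prints intact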
As pointed out, your problem is expanding a parameter containing a formatting instruction in the formatting argument of printf, which can be solved by using echo instead or moving the parameter to be expanded out of the formatting string, as demonstrated in other answers.
I recommend not looping over your whole file with Bash in the first place, as it's notoriously slow; you're extracting lines starting with certain patterns, which is a job at which grep excels:
grep '^mme ' extract.csv > mme_settings.csv
grep '^sms ' extract.csv > sms_settings.csv
grep -v '^mme \|^sms ' extract.csv > extract_clean.csv
The third command uses the -v option (print lines that do not match) and alternation to exclude lines starting with either mme or sms.
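If you'd prefer a single pass over the file instead of three, the same split can be sketched with awk (not part of the original answer; the file names match those above):
awk '/^mme / { print > "mme_settings.csv"; next }
     /^sms / { print > "sms_settings.csv"; next }
             { print > "extract_clean.csv" }' extract.csv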
I'm attempting to read a config file that is formatted as follows:
USER = username
TARGET = arrows
I realize that if I got rid of the spaces, I could simply source the config file, but for security reasons I'm trying to avoid that. I know there is a way to read the config file line by line. I think the process is something like:
Read lines into an array
Filter out all of the lines that start with #
search for the variable names in the array
After that I'm lost. Any and all help would be greatly appreciated. I've tried something like this with no success:
backup2.config>cat ~/1
grep '^[^#].*' | while read one two;do
echo $two
done
I pulled that from a forum post I found, just not sure how to modify it to fit my needs since I'm so new to shell scripting.
http://www.linuxquestions.org/questions/programming-9/bash-shell-program-read-a-configuration-file-276852/
Would it be possible to automatically assign a variable by looping through both arrays?
for (( i = 0 ; i < ${#VALUE[@]} ; i++ ))
do
"${NAME[i]}"=VALUE[i]
done
echo $USER
Such that calling $USER would output "username"? The above code isn't working but I know the solution is something similar to that.
The following script iterates over each line in your input file (vars in my case) and does a pattern match against =. If the equal sign is found, it uses Parameter Expansion to parse out the variable name and the value. It then stores each part in its own array, name and value respectively.
#!/bin/bash
i=0
while read line; do
if [[ "$line" =~ ^[^#]*= ]]; then
name[i]=${line%% =*}
value[i]=${line#*= }
((i++))
fi
done < vars
echo "total array elements: ${#name[#]}"
echo "name[0]: ${name[0]}"
echo "value[0]: ${value[0]}"
echo "name[1]: ${name[1]}"
echo "value[1]: ${value[1]}"
echo "name array: ${name[#]}"
echo "value array: ${value[#]}"
Input
$ cat vars
sdf
USER = username
TARGET = arrows
asdf
as23
Output
$ ./varscript
total array elements: 2
name[0]: USER
value[0]: username
name[1]: TARGET
value[1]: arrows
name array: USER TARGET
value array: username arrows
First, USER is a shell environment variable, so it might be better if you used something else. Using lowercase or mixed case variable names is a way to avoid name collisions.
#!/bin/bash
configfile="/path/to/file"
shopt -s extglob
while IFS='= ' read lhs rhs
do
if [[ $lhs != *( )#* ]]
then
# you can test for variables to accept or other conditions here
declare $lhs=$rhs
fi
done < "$configfile"
This sets the vars in your file to the value associated with it.
echo "Username: $USER, Target: $TARGET"
would output
Username: username, Target: arrows
Another way to do this using keys and values is with an associative array:
Add this line before the while loop:
declare -A settings
Remove the declare line inside the while loop and replace it with:
settings[$lhs]=$rhs
Then:
# set keys
user=USER
target=TARGET
# access values
echo "Username: ${settings[$user]}, Target: ${settings[$target]}"
would output
Username: username, Target: arrows
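Putting those pieces together, a minimal sketch of the associative-array variant described above (same hypothetical config file path; the blank-line guard is an addition):
#!/bin/bash
configfile="/path/to/file"
shopt -s extglob
declare -A settings
while IFS='= ' read -r lhs rhs
do
    # skip comment lines and blank lines
    if [[ $lhs != *( )#* && -n $lhs ]]
    then
        settings[$lhs]=$rhs
    fi
done < "$configfile"
echo "Username: ${settings[USER]}, Target: ${settings[TARGET]}"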
I have a script which only takes a very limited number of settings and processes them one at a time, so I've adapted SiegeX's answer to whitelist the settings I care about and act on each one as it comes up.
I've also removed the requirement for spaces around the =, in favour of ignoring any that exist, using the trim function from another answer.
function trim()
{
local var=$1;
var="${var#"${var%%[![:space:]]*}"}"; # remove leading whitespace characters
var="${var%"${var##*[![:space:]]}"}"; # remove trailing whitespace characters
echo -n "$var";
}
while read line; do
if [[ "$line" =~ ^[^#]*= ]]; then
setting_name=$(trim "${line%%=*}");
setting_value=$(trim "${line#*=}");
case "$setting_name" in
max_foos)
prune_foos $setting_value;
;;
max_bars)
prune_bars $setting_value;
;;
*)
echo "Unrecognised setting: $setting_name";
;;
esac;
fi
done <"$config_file";
Thanks SiegeX. I think the later updates you mentioned are not reflected at this URL.
I had to edit the regex to remove the quotes to get it working. With the quotes in place, the returned arrays are empty.
i=0
while read line; do
if [[ "$line" =~ ^[^#]*= ]]; then
name[i]=${line%% =*}
value[i]=${line##*= }
((i++))
fi
done < vars
A still better version is:
i=0
while read line; do
if [[ "$line" =~ ^[^#]*= ]]; then
name[i]=`echo $line | cut -d'=' -f 1`
value[i]=`echo $line | cut -d'=' -f 2`
((i++))
fi
done < vars
The first version has issues if there is no space before and after "=" in the config file. Also, if the value is missing, I see that the name and value end up populated with the same string. The second version has neither of these problems, and in addition it trims out unwanted leading and trailing spaces.
This version reads values that can have = within them; the earlier version splits at the first occurrence of =.
i=0
while read line; do
if [[ "$line" =~ ^[^#]*= ]]; then
name[i]=`echo $line | cut -d'=' -f 1`
value[i]=`echo $line | cut -d'=' -f 2-`
((i++))
fi
done < vars
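For example, with a hypothetical line whose value itself contains an =, the difference between -f2 and -f2- is:
$ echo 'URL = http://example.com/?a=1' | cut -d'=' -f2
 http://example.com/?a
$ echo 'URL = http://example.com/?a=1' | cut -d'=' -f2-
 http://example.com/?a=1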