AFL-Fuzz - Odd, Check Syntax! - How to add command line arguments to a binary?

I am attempting to fuzz a proprietary binary with no source code that accepts a config file. So the typical use case would be:
./File --config file.config
The config file is a bunch of parameters that the rest of the program needs, and the binary runs fine if I run it by itself with that config. Additionally, the config file is inside the input directory.
I am attempting to fuzz it utilizing the following command with AFL:
./afl-fuzz -Q -i input/ -o output/ -m 400 ./File --configfile
However, once I run the command, everything looks fine, but as soon as I get to the first iteration of 'havoc', I get an 'odd, check syntax!' error. If I add @@ at the end, afl gives me a timeout error instead. I'm assuming that once afl-fuzz starts to mutate that input file it breaks the binary, but I'm not sure, and I'm not sure what else to try - any ideas? Thanks!
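For reference, afl-fuzz substitutes the path of each mutated test case wherever @@ appears on the target's command line, so an invocation along these lines (reusing the --config flag from the usage shown above; adjust to whatever the binary actually expects) would be the usual shape:
./afl-fuzz -Q -i input/ -o output/ -m 400 -- ./File --config @@
Without @@, afl-fuzz feeds the mutated data to the target's stdin instead, which a config-file-driven program will typically ignore.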

Related

Run an executable and read logs at the same time

I have a scenario where I run an executable (as an entrypoint) from a Docker container.
The problem is that the executable doesn't write logs to stdout, but to a file.
I need a way to run that executable in the foreground (so that if it crashes, it crashes the container as well), but pipe logs from a file to stdout at the same time.
Any suggestion on how to do that?
The Linux environment provides a couple of special files that actually relay to other file descriptors. If you set the log file to /dev/stdout or /dev/fd/1, it will actually appear on the main process's stdout.
The Docker Hub nginx image has a neat variation on this. If you look at its Dockerfile it specifies:
RUN ln -sf /dev/stdout /var/log/nginx/access.log
The Nginx application configuration specifies its log file as /var/log/nginx/access.log. If you do nothing, that is a symlink to /dev/stdout, and so access logs appear in the docker logs output. But if you'd prefer to have the logs in files, you can bind-mount a host directory on /var/log/nginx and you'll get access.log on the host as a file instead.
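As a rough sketch of the same idea for an arbitrary binary (the /opt/myapp/bin/app path and the /var/log/myapp/app.log location are made up for illustration), the entrypoint could do something like:
mkdir -p /var/log/myapp
ln -sf /dev/stdout /var/log/myapp/app.log    # the same trick the nginx image uses
exec /opt/myapp/bin/app                      # the app keeps writing to its usual log path; the bytes land on the container's stdout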
G'day areller!
Well, your question is quite generic/abstract (as David Maze mentioned, you didn't say what the exact commands are or how you are running them), but I think I got it.
You will do the following:
# if the command logs to stdout, do:
command &> /var/tmp/command.log &
tail -f /var/tmp/command.log
# if the command logs to a specific file, say somewhere under /var/adm, do:
tail -f /specific/file/directory/command.log
Instead of explaining tail -f myself, I will quote the tail(1p) manual page:
-f  If the input file is a regular file or if the file operand specifies a FIFO, do not terminate after the last line of the input file has been copied, but read and copy further bytes from the input file when they become available. If no file operand is specified and standard input is a pipe, the -f option shall be ignored. If the input file is not a FIFO, pipe, or regular file, it is unspecified whether or not the -f option shall be ignored.
I hope I've helped you.
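Putting the two answers together, a minimal entrypoint sketch (again, the binary and log paths are placeholders, not anything from your setup) could be:
#!/bin/bash
tail -F /var/log/myapp/app.log &   # stream the log file to the container's stdout; -F keeps retrying if the file doesn't exist yet
exec /opt/myapp/bin/app            # exec keeps the executable in the foreground, so a crash also stops the container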

Get log entries written while executing command

I have a service that writes to a file in /var/log. For testing purposes, I am looking for a way to extract just the log lines that are written while executing a command against the service. I know I could do it with a C program using fseek/ftell, but that would require extra tooling in the VM. I would prefer a pure bash solution (bash 4.4, Ubuntu 18.04). I thought something using tail -f might work, but I can't figure out exactly how to make that work.
You can use the diff command. It takes two files as input and prints the differing lines. You can copy the log file before executing the command against the service and compare the copy to the log file afterwards.
$ cat > logfile
line 1
line 2 asdf
$ cp logfile logfile-old
$ cat >> logfile
Third one.
Oups. Error occured.
$ diff logfile logfile-old
3,4d2
< Third one.
< Oups. Error occured.
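If you would rather stay close to the tail idea and avoid the copy, a pure-bash sketch (service.log and trigger_command are placeholders for your actual log file and the command you run against the service) is to note the line count first and print only what is appended afterwards:
logfile=/var/log/service.log
before=$(wc -l < "$logfile")              # lines present before the command runs
trigger_command                           # whatever you execute against the service
tail -n +"$((before + 1))" "$logfile"     # print only the lines written after that point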

bash commands to remote hosts - errors with writing local output files

I'm trying to run several sets of commands in parallel on a few remote hosts.
I've created a script that constructs these commands, and then writes the output in a local file, something along the lines of:
ssh <me>@<ip1> "command" 2> ./path/to/file/newFile1.txt &
ssh <me>@<ip2> "command" 2> ./path/to/file/newFile2.txt &
ssh <me>@<ip2> "command" 2> ./path/to/file/newFile3.txt;
...(the same repeats itself, with new commands and new file names)...
My issue is that, when my script runs these commands, I am getting the following errors:
bash: ./path/to/file/newFile1.txt: No such file or directory
bash: ./path/to/file/newFile2.txt: No such file or directory
bash: ./path/to/file/newFile3.txt: No such file or directory
...
These files do NOT exist yet; they are meant to be created by the redirections. That being said, the directory paths are valid.
The strange thing is that, if I copy and paste the whole big command, it works without any issue. I'd rather have it automated though ;).
Any ideas?
Edit - more information:
My filesystem is the following:
- home
  - User
    - Desktop
      - Servers
      - Outputs
      - ...
I am running the bash script from home/User/Desktop/Servers.
The script creates the commands that need to be run on the remote servers. First things first, the script creates the directories where the files will be stored.
outputFolder="./Outputs"
...
mkdir -p ${outputFolder}/f${fileNumb}
...
The script then continues to create the commands that will be called on the remote hosts, and their respective outputs will be placed in the created directories.
The directories are there. Running the commands from the script gives me the errors; however, printing the commands and then copy-pasting them in the same location works for some reason. I have also tried giving the full path to the directory, still the same issue.
Hope I've been a bit clearer.
If this is the exact error message you get:
bash:  ./path/to/file/newFile1.txt: No such file or directory
Then you'll note that there's an extra space between the colon and the dot, so it's actually trying to open a file called " ./path/to/file/newFile1.txt" (without the quotes).
However, to accomplish that, you'd need to use quotes around the filename in the redirection, as in
something ... 2> " ./path/to/file/newFile1.txt"
Or the first character would have to be something other than a regular space. A non-breaking space perhaps, possibly something that some editor might create if you hit Alt-Space or such.
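If you want to check for an invisible character like that, one way (assuming the generated commands live in a file, here called commands.sh purely for illustration) is to make non-printing bytes visible:
cat -A commands.sh | grep newFile1        # GNU cat -A shows tabs as ^I and odd bytes as M-... sequences
grep -n $'\xc2\xa0' commands.sh           # flags UTF-8 non-breaking spaces using bash's $'...' quoting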
I don't believe you've shown enough to correctly answer the question.
This doesn't look like a problem with ssh, but the way you are calling the (ssh) commands.
You say that you are writing the commands into a file... presumably you are then running that file as a script. Could you show the code you use to do that? I believe that's your problem.
I suspect you have made a false assumption about the way the working directory changes when you run a script. It doesn't. You are listing relative paths, so it's important to know what they are relative to. That is the most likely reason for it working when you copy and paste it... you are executing from a different working directory.
I am new to bash scripting and was building my script based on another one I had seen. I was "running" the command by simply calling the variable where the command was stored:
$cmd
Solved by using:
eval $cmd
instead. My bad, should have given the full script from the start.
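For what it's worth, the difference is that a bare $cmd only performs word splitting on the stored string, so things like 2> and & are handed to the command as literal arguments, while eval re-parses the whole line as shell code. A tiny illustration (the command and paths are made up):
mkdir -p ./out
cmd='echo hello 2> ./out/err.txt &'
$cmd            # prints: hello 2> ./out/err.txt &   (no redirection, no backgrounding)
eval "$cmd"     # prints: hello, with stderr redirected and the job backgrounded as intended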

First line in file is not always printed in bash script

I have a bash script that prints a line of text into a file, and then calls a second script that prints some more data into the same file. Let's call them script1.sh and script2.sh. The reason it's split into two scripts is that I have different versions of script2.sh.
script1.sh:
rm -f output.txt
echo "some text here" > output.txt
source script2.sh
script2.sh:
./read_time >> output.txt
./run_program
./read_time >> output.txt
Variations on the three lines in script2.sh are repeated.
This seems to work most of the time, but every once in a while the file output.txt does not contain the line "some text here". At first I thought it was because I was calling script2.sh like this: ./script2.sh. But even using source the problem still occurs.
The problem is not reproducible, so even when I try to change something I don't know if it's actually fixed.
What could be causing this?
Edit:
The scripts are very simple. script1 is exactly as you see here, but with different file names. script 2 is what I posted, but then the same 3 lines repeated, and ./run_program can have different arguments. I did a grep for the output file, and for > but it doesn't show up anywhere unexpected.
The way these scripts are used is that script1 is created by a program (the only difference between the versions is the source script2.sh line). This script1.sh is then run on a different computer (Linux on an FPGA, actually) using ssh. Before that is done, the output file is also deleted using ssh. I don't know why, but I didn't write all of this. Also, I've checked the code running on the host. The only mention of the output file is when it is deleted using ssh, and when it is copied back to the host after script1 is done.
Edit 2:
I finally managed to make the problem reproducible at a reasonable rate by stripping script2.sh of everything but a single line printing into the file. This also let me do the testing a bit faster. Once I had this, I got the problem between 1 and 4 times for every 10 runs. Removing the command that was deleting the file over ssh before the script was run seems to have solved the problem. I will test it some more to be sure, but I think it's solved, although I'm still not sure why it was a problem. I thought that the ssh command would not exit before all the remove commands were executed.
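If the separate delete-over-ssh really was the culprit, one possible workaround (the user, host and paths below are placeholders) is to run the cleanup and the script in the same remote shell, so the ordering is guaranteed:
ssh user@fpga-board 'rm -f /path/to/output.txt && ./script1.sh'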
It is hard to tell without seeing the real code. The most likely explanation is that you have a typo, > instead of >>, somewhere in one of the script2.sh files.
To verify this, set the noclobber option with set -o noclobber. Any attempt to overwrite an existing file with > will then fail with an error, which makes the offending line easy to spot.
Another possibility is that the file is removed under certain rare conditions. Or it is damaged by some command that has random access to it - look for commands using this file without >>. Or it is used by some command as both input and output, and the two step on each other - look for the file used with <.
Lastly, you could have a race condition with a command that writes to the file in the background and was started before that echo.
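A quick sketch of what the noclobber suggestion looks like in practice:
set -o noclobber
rm -f output.txt
echo "some text here" > output.txt    # fine: the file does not exist yet
echo "oops" > output.txt              # fails with "cannot overwrite existing file"
echo "more" >> output.txt             # appending is still allowed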
Can you grep all your scripts for 'output.txt'? What about scripts called inside read_time and run_program?
It looks like something in one of the script2.sh scripts must be either overwriting, truncating or doing a substitution on output.txt.
For example, there could be a '> output.txt' buried inside a conditional for a condition that rarely holds. Just a guess, but it would explain why you don't always see it.
This is an interesting problem. Please post the solution when you find it!

Help with aliases in shell scripts

I have the following code, which is intended to run a java program on some input, and test that input against a results file for verification.
#!/bin/bash
java Program ../tests/test"$@".tst > test"$@".asm
spim -f test"$@".asm > temp
diff temp ../results/test"$@".out
The gist of the above code is to:
Run Program on a test file in another directory, and pipe the output into an assembly file.
Run a MIPS processor on that program's output, piping that into a file called temp.
Run diff on the output I generated and some expected output.
I made this shell script to help me automate checking of my homework assignment for class. I didn't feel like manually checking things anymore.
I must be doing something wrong, as although this program works with one argument, it fails with more than one. The output I get if I use $@ is:
./test.sh: line 2: test"$@".asm: ambiguous redirect
Cannot open file: `test0'
EDIT:
Ah, I figured it out. This code fixed the problem:
#!/bin/bash
for arg in $@
do
java Parser ../tests/test"$arg".tst > test"$arg".asm
spim -f test"$arg".asm > temp
diff temp ../results/test"$arg".out
done
It turns out that bash expanded $@ to all of the command-line arguments each time I used it, rather than to one argument at a time.
If you provide multiple command-line arguments, then clearly $@ will expand to a list of multiple arguments, which means that all your commands will be nonsense.
What do you expect to happen for multiple arguments?
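To make the expansion concrete, a tiny sketch (call it demo.sh and run it as ./demo.sh 0 1 2) shows why the redirect target was ambiguous and why looping per argument fixes it:
#!/bin/bash
echo test"$@".asm           # expands to three words: test0 1 2.asm - hence the ambiguous redirect
for arg in "$@"; do
    echo test"$arg".asm     # test0.asm, test1.asm, test2.asm - one well-formed name per argument
done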
