Convert hex to binary and send it over network - bash

I need to read hexadecimal data from stdin, convert it to binary, send it with netcat, receive the reply, convert it back to hex and print it to stdout. I do:
# xxd -r -p | nc -u localhost 12345 | xxd
Then I type my data in hex and press Enter. But it is not sent until I press Ctrl+D, so I'm unable to send another packet after receiving the reply. It looks like xxd -r -p doesn't write the binary data until EOF is given. Is there a way to make it write after each newline?

By default, most *nix utilities will do line buffering when in interactive mode (e.g. stdin/stdout connected directly to the terminal emulator). But when in non-interactive mode (e.g. stdin/stdout connected to a pipe), larger buffers are typically used - I think 8k or so is typical, but this varies widely by implementation/distro.
You can force buffering for a given process to line mode using the GNU stdbuf utility, if you have it available:
stdbuf -oL xxd -r -p | nc -u localhost 12345 | xxd
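If stdbuf isn't available, a rough per-line alternative is to run a fresh xxd for every line you type; each xxd exits after converting its line, which flushes its output immediately (a sketch, not a polished solution):
while IFS= read -r hexline; do
    printf '%s\n' "$hexline" | xxd -r -p
done | nc -u localhost 12345 | xxd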

Related

Convert array of bytes to base64 string using bash [duplicate]

cat /dev/urandom is always a fun way to create scrolling characters on your display, but produces too many non-printable characters.
Is there an easy way to encode it on the command-line in such a way that all of its output is readable characters, base64 or uuencode for example?
Note that I prefer solutions that require no additional files to be created.
What about something like
cat /dev/urandom | base64
Which gives (lots of) stuff like
hX6VYoTG6n+suzKhPl35rI+Bsef8FwVKDYlzEJ2i5HLKa38SLLrE9bW9jViSR1PJGsDmNOEgWu+6
HdYm9SsRDcvDlZAdMXAiHBmq6BZXnj0w87YbdMnB0e2fyUY6ZkiHw+A0oNWCnJLME9/6vJUGsnPL
TEw4YI0fX5ZUvItt0skSSmI5EhaZn09gWEBKRjXVoGCOWVlXbOURkOcbemhsF1pGsRE2WKiOSvsr
Xj/5swkAA5csea1TW5mQ1qe7GBls6QBYapkxEMmJxXvatxFWjHVT3lKV0YVR3SI2CxOBePUgWxiL
ZkQccl+PGBWmkD7vW62bu1Lkp8edf7R/E653pi+e4WjLkN2wKl1uBbRroFsT71NzNBalvR/ZkFaa
2I04koI49ijYuqNojN5PoutNAVijyJDA9xMn1Z5UTdUB7LNerWiU64fUl+cgCC1g+nU2IOH7MEbv
gT0Mr5V+XAeLJUJSkFmxqg75U+mnUkpFF2dJiWivjvnuFO+khdjbVYNMD11n4fCQvN9AywzH23uo
03iOY1uv27ENeBfieFxiRwFfEkPDgTyIL3W6zgL0MEvxetk5kc0EJTlhvin7PwD/BtosN2dlfPvw
cjTKbdf43fru+WnFknH4cQq1LzN/foZqp+4FmoLjCvda21+Ckediz5mOhl0Gzuof8AuDFvReF5OU
Or, without the (useless) cat+pipe:
base64 /dev/urandom
(Same kind of output ^^ )
EDIT: you can also use the --wrap option of base64, to avoid having "short lines":
base64 --wrap=0 /dev/urandom
This will remove wrapping, and you'll get "full-screen" display ^^
A number of folks have suggested catting and piping through base64 or uuencode. One issue with this is that you can't control how much data to read (it will continue forever, or until you hit ctrl+c). Another possibility is to use the dd command, which will let you specify how much data to read before exiting. For example, to read 1kb:
dd if=/dev/urandom bs=1k count=1 2>/dev/null | base64
Another option is to pipe to the strings command, which may give more variety in its output (non-printable characters are discarded, and any run of at least 4 printable characters [by default] is displayed). The problem with strings is that it displays each "run" on its own line.
dd if=/dev/urandom bs=1k count=1 2>/dev/null | strings
(of course you can replace the entire command with
strings /dev/urandom
if you don't want it to ever stop).
If you want something really funky, try one of:
cat -v /dev/urandom
dd if=/dev/urandom bs=1k count=1 2>/dev/null | cat -v
So, what is wrong with
cat /dev/urandom | uuencode -
?
Fixed after the first attempt didn't actually work... ::sigh::
BTW-- Many unix utilities use '-' in place of a filename to mean "use the standard input".
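For example (just to illustrate that convention; head keeps the output short):
head -c 30 /dev/urandom | uuencode -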
There are already several good answers on how to base64 encode random data (i.e. cat /dev/urandom | base64). However in the body of your question you elaborate:
... encode [urandom] on the command-line in such a way that all of its output is readable characters, base64 or uuencode for example?
Given that you don't actually require parseable base64 and just want it to be readable, I'd suggest
cat /dev/urandom | tr -dC '[:graph:]'
base64 only outputs alphanumeric characters and two symbols (+ and / by default). [:graph:] will match any printable, non-whitespace ASCII character, including many symbols and punctuation marks that base64 lacks. Therefore, using tr -dC '[:graph:]' will result in more random-looking output, and have better input/output efficiency.
I often use < /dev/random stdbuf -o0 tr -Cd '[:graph:]' | stdbuf -o0 head --bytes 32 for generating strong passwords.
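If you want a bounded chunk of readable noise rather than an endless stream, the same tr trick can be combined with the dd approach shown earlier (a small sketch; adjust bs/count to taste):
dd if=/dev/urandom bs=1k count=1 2>/dev/null | tr -dC '[:graph:]'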
You can do more interesting stuff with bash's process substitution (which uses FIFOs or /dev/fd under the hood):
uuencode <(head -c 200 /dev/urandom | base64 | gzip) sample
cat /dev/urandom | tr -dc 'a-zA-Z0-9'
Try
xxd -ps /dev/urandom
xxd(1)

Nothing prints after piping ping through two commands

Running this:
ping google.com | grep -o 'PING'
Will print PING to the terminal, so I assume that means that the stdout of grep was captured by the terminal.
So why doesn't the following command print anything? The terminal just hangs:
ping google.com | grep -o 'PING' | grep -o 'IN'
I would think that the stdout of the first grep command would be redirected to the stdin of the second grep. Then the stdout of the second grep would be captured by the terminal and printed.
This seems to be what happens if ping is replaced with echo:
echo 'PING' | grep -o 'PING' | grep -o 'IN'
IN is printed to the terminal, as I would expect.
So what's special about ping that prevents anything from being printed?
You could try being more patient :-)
ping google.com | grep -o 'PING' | grep -o 'IN'
will eventually display output, but it might take half an hour or so.
Under Unix, the standard output stream handed to a program when it starts up is "line-buffered" if the stream is a terminal; otherwise it is fully buffered, typically with a buffer of 8 kilobytes (8,192 characters). Buffering means that output is accumulated in memory until the buffer is full, or, in the case of line-buffered streams, until a newline character is sent.
Of course, a program can override this setting, and programs which produce only small amounts of output -- like ping -- typically make stdout line-buffered regardless of what it is attached to. But grep does not do so (although you can tell GNU grep to do that with the --line-buffered command-line option).
"Pipes" (which are created to implement the | operator) are not considered terminals. So the grep in the middle will have a fully-buffered output, meaning that its output will be buffered until 8k characters are written. That will take a while in your case, because each line contains only five characters (PING plus a newline), and they are produced once a aecond. So the buffer will fill up after about 1640 seconds, which is almost 28 minutes.
Many Unix distributions come with a program called stdbuf which can be used to change the buffering of the standard streams before running a program. (If you have stdbuf, you can find out how it works by typing man 1 stdbuf.) Programming languages like Perl generally provide other mechanisms to call the setvbuf standard library function. (In Perl, you can force a flush after every write using the builtin variable $|, or the autoflush(BOOL) IO handle method.)
Of course, when a program terminates successfully, all output buffers are "flushed" (sent to their respective streams). So
echo PING | grep -o 'PING' | grep -o 'IN'
will immediately output its only output line. But ping does not terminate unless you provide a count command-line option (-c N; see man ping). So if you need immediate piped throughput, you may need to modify buffering behaviour.
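A sketch of both fixes mentioned above, assuming GNU grep (for --line-buffered) or GNU coreutils (for stdbuf); either one makes the middle grep flush each match as soon as it is written:
ping google.com | grep --line-buffered -o 'PING' | grep -o 'IN'
ping google.com | stdbuf -oL grep -o 'PING' | grep -o 'IN'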

bash - Type characters in hexadecimal notation to standard input

I'd like to write non-ASCII characters (0xfe, 0xed, etc) to a program's standard input.
There are a lot of similar questions to this one, but I didn't find an answer, because:
I want to write single-byte characters, not unicode characters
I can't pipe the output of echo or something
On OS X¹ you can test with:
nm - -
I'd like to write an object file's magic bytes (e.g. 0xfeedface) to nm using standard input, so I can see how it behaves and I can recode it.
If I use a pipe, then the second argument -, which means stdin, will never match any bytes, since all the standard input will go to the first one. When using a terminal instead of a pipe, I can type Ctrl + D so the first one gets 'closed' and the second one starts reading.
I tried Ctrl + Shift + U and the Unicode Hex Input of OS X, but it doesn't work -- I can't type the desired characters with it.
I also tried using the clipboard with pbcopy, but it fails to read/paste non-ASCII or non-Unicode characters.
How can I achieve my goal?
Don't hesitate to edit as this was a difficult question to express.
¹ The nm on linux does not handle stdin.
You can echo your desired hex code into a file.
echo -e -n '\xde\xad\xbe\xef\xde\xad\xbe\xef' >/tmp/yo
or
echo -en \\xde\\xad\\xbe\\xef\\xde\\xad\\xbe\\xef >/tmp/yo
and make your executable read from this file instead of stdin:
./executable </tmp/yo
If you don't want to create a file, here's an alternative:
python -c 'print("\x61\x62\x63\x64")' | /path/to/exe
If you want stdin control to be transferred back (in case you're trying to execute an interactive shell), we need a subshell to keep sending input on stdin; otherwise, after the first input, the executable would exit as it is not going to get anything further on stdin:
( python -c 'print("\x61\x62\x63\x64")' ; cat ) | /path/to/exe
Python does some juggling with the bytes, so in the case of Python 3 you'll have to do the following:
( python -c 'import sys; sys.stdout.buffer.write(b"\x61\x62\x63\x64")' ; cat ) | /path/to/exe
This answer helped me:
https://reverseengineering.stackexchange.com/questions/13928/managing-inputs-for-payload-injection
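A bash-only variant of the same trick (a sketch assuming bash, whose builtin printf understands \xHH escapes; cat again keeps stdin open so you can continue typing afterwards):
( printf '\xfe\xed\xfa\xce' ; cat ) | /path/to/exe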
Try a util like xxd:
# echo hex 52 to pipe, convert it to binary, which goes to stdout
echo 52 | xxd -r -p ; echo
R
Or, for a more specialized util, try ascii2binary (the default input base is decimal):
# echo dec 52 to pipe, convert it to binary, which goes to stdout
echo 52 | ascii2binary ; echo
4
# echo base11 52 to pipe, convert it to binary, which goes to stdout
echo 52 | ascii2binary -b 11 ; echo
9
Or dump a series of hex chars, showing what hexdump sees:
echo 7 ef 52 ed 19 | ascii2binary -b h | \
hexdump -v -e '/1 "%_ad# "' -e '/1 " _%_u\_\n"'
0# _bel_
1# _ef_
2# _R_
3# _ed_
4# _em_
See man xxd and man ascii2binary for the various tricks these utils can do.
Using bash, you can echo the input,
echo -e -n '\x61\x62\x63\x64' | /path/someFile.sh --nox11
or use cat, which might be more comfortable when there are several lines of prompting:
cat $file | /path/someFile.sh --nox11
You can omit --nox11, but it might help when the script spawns a new terminal instance.
Note: This will not work with /bin/sh!
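If you do need it to work under /bin/sh as well, a more portable sketch is to use octal escapes, which POSIX printf does specify (these are the same bytes as \x61\x62\x63\x64):
printf '\141\142\143\144' | /path/someFile.sh --nox11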

Realtime removal of carriage return in shell

For context, I'm attempting to create a shell script that simplifies the realtime console output of ffmpeg, only displaying the current frame being encoded. My end goal is to use this information in some sort of progress indicator for batch processing.
For those unfamiliar with ffmpeg's output, it outputs encoded video data to stdout and console information to stderr. Also, when it actually gets to displaying encoding information, it uses carriage returns to keep the console screen from filling up. This makes it impossible to simply use grep and awk to capture the appropriate line and the frame information.
The first thing I've tried is replacing the carriage returns using tr:
$ ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | tr '\r' '\n'
This works in that it displays realtime output to the console. However, if I then pipe that information to grep or awk or anything else, tr's output is buffered and is no longer realtime. For example: $ ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | tr '\r' '\n'>log.txt results in a file that is immediately filled with some information, then 5-10 secs later, more lines get dropped into the log file.
At first I thought sed would be great for this: $ ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | sed 's/\\r/\\n/', but it gets to the line with all the carriage returns and waits until the processing has finished before it attempts to do anything. I assume this is because sed works on a line-by-line basis and needs the whole line to be complete before it does anything else, and then it doesn't replace the carriage returns anyway. I've tried various regexes for the carriage return and newline, and have yet to find a solution that replaces the carriage return. I'm running OS X 10.6.8, so I am using BSD sed, which might account for that.
I have also attempted to write the information to a log file and use tail -f to read it back, but I still run into the issue of replacing carriage returns in realtime.
I have seen that there are solutions for this in python and perl, however, I'm reluctant to go that route immediately. First, I don't know python or perl. Second, I have a completely functional batch processing shell application that I would need to either port or figure out how to integrate with python/perl. Probably not hard, but not what I want to get into unless I absolutely have to. So I'm looking for a shell solution, preferably bash, but any of the OSX shells would be fine.
And if what I want is simply not doable, well I guess I'll cross that bridge when I get there.
If it is only a matter of output buffering by the receiving application after the pipe, then you could try using gawk (and some BSD awks) or mawk, which can flush buffers. For example, try:
... | gawk '1;{fflush()}' RS='\r\n' > log.txt
Alternatively, if your awk does not support this, you could force it by repeatedly closing the output file and appending the next line:
... | awk '{sub(/\r$/,x); print>>f; close(f)}' f=log.out
Or you could just use shell, for example in bash:
... | while IFS= read -r line; do printf "%s\n" "${line%$'\r'}"; done > log.out
Libc uses line-buffering when stdout and stderr are connected to a terminal and full-buffering (with a 4KB buffer) when connected to a pipe. This happens in the process generating the output, not in the receiving process—it's ffmpeg's fault, in your case, not tr's.
Try using unbuffer or stdbuf to disable output buffering:
unbuffer ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | tr '\r' '\n'
stdbuf -e0 -o0 ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | tr '\r' '\n'
The buffering of data between processes in a pipe is controlled by some system limits, which at least on my system (Fedora 17) cannot be modified:
$ ulimit -a | grep pipe
pipe size (512 bytes, -p) 8
$ ulimit -p 1
bash: ulimit: pipe size: cannot modify limit: Invalid argument
$
Although this buffering is mostly about how much excess data the producer is allowed to produce before it is stopped when the consumer is not consuming at the same speed, it might also affect the timing of delivery of smaller amounts of data (not quite sure of this).
That is the buffering of pipe data, and I do not think there is much to tweak here. However, the programs reading/writing the piped data might also buffer stdin/stdout data, and that is what you want to avoid in your case.
Here is a perl script that should do the translation with minimal input buffering and no output buffering:
#!/usr/bin/perl
use strict;
use warnings;
use Term::ReadKey;
my $ReadKeyTimeout = 10; # seconds
$| = 1;                  # OUTPUT_AUTOFLUSH: do not buffer output
while ( my $key = ReadKey($ReadKeyTimeout) ) {
    if ( $key eq "\r" ) {    # translate each carriage return into a newline
        print "\n";
        next;
    }
    print $key;              # pass everything else through unchanged
}
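A usage sketch, assuming the script above is saved as cr2lf.pl (a hypothetical name) and made executable:
ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | ./cr2lf.pl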
However, as already pointed out, you should make sure that ffmpeg does not buffer its output if you want a real-time response.

Sniffing and displaying TCP packets in UTF-8

I am trying to use tcpdump to display the content of tcp packets flowing on my network.
I have something like:
tcpdump -i wlan0 -l -A
The -A option displays the content as ASCII text, but my text seems to be UTF-8. Is there a way to display UTF-8 properly using tcpdump? Do you know any other tools which could help?
Many thanks
Make sure your terminal supports outputting UTF-8, and pipe the output through something which replaces non-printable characters:
tcpdump -lnpi lo tcp port 80 -s 16000 -w - | tr -t '[^[:print:]]' ''
tcpdump -lnpi lo tcp port 80 -s 16000 -w - | strings -e S -n 1
If your terminal does not support UTF-8, you have to convert the output to a supported encoding, e.g.:
tcpdump -lnpi lo tcp port 80 -s 16000 -w - | tr -t '[^[:print:]]' '' | iconv -c -f utf-8 -t cp1251
The -c option tells iconv to omit characters which do not have a valid representation in the target encoding.
tcpdump -i wlan0 -w packet.ppp
This command stores the packets in packet.ppp.
After that, open it in Wireshark:
wireshark packet.ppp
Right-click on a packet and then select Follow TCP Stream.
Then you have different formats available to view the data in Wireshark.
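If you would rather stay on the command line, Wireshark's companion tool tshark can follow a stream straight from the capture file (a sketch; the trailing 0 is the index of the TCP stream to follow):
tshark -r packet.ppp -q -z follow,tcp,ascii,0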
There are many options that you can explore to sniff packets.
Wireshark is the most useful sniffer and it is available for free for all platforms. It has a feature-rich GUI which will help you sniff packets and analyze protocols. It has many filters, so you can filter out unwanted packets and only look at the packets you are interested in.
Check out their web page, where it is available for download for Windows and OS X.
To download it for Linux distros, check out this link.
If you prefer an alternative more along the lines of tcpdump, you can also explore tcpflow, which is definitely a good option for analyzing packets. It also gives you the option to store the files for later analysis.
Check this link: tcpflow
Another option is Justsniffer, which probably best addresses your problem: it provides text-mode logging and is customizable.
