I'm trying to encrypt a message using AES in Go.
package main

import (
	"crypto/aes"
	"fmt"
)

func main() {
	key := "mysupersecretkey32bytecharacters"
	plainText := "thisismyplaintextingolang"
	fmt.Println("My Encryption")
	byteCipherText := encrypt([]byte(key), []byte(plainText))
	fmt.Println(byteCipherText)
}

func encrypt(key, plaintext []byte) []byte {
	cphr, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	ciphertext := make([]byte, len(plaintext))
	cphr.Encrypt(ciphertext, plaintext)
	return ciphertext
}
That function returns : [23 96 11 10 70 223 95 118 157 250 80 92 77 26 137 224 0 0 0 0 0 0 0 0 0]
In that result, only the first 16 byte values are non-zero. That suggests AES encryption in Go only encrypts 16 characters.
Is it possible to encrypt more than 16 characters with AES in Go without using any mode of operation (like GCM, CBC, CFB, etc.), just pure AES?
aes.NewCipher returns an instance of cipher.Block, which encrypts in blocks of 16 bytes (this is how pure AES works).
A mode of operation is literally what determines how messages longer than 16 bytes are encrypted. The simplest one is ECB (effectively a "no-op" mode), which simply repeats the encryption in 16-byte blocks using the same key. You can do the same with a simple for-loop, though keep in mind that ECB is not secure.
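That for-loop can be sketched as follows. Note this is effectively ECB and is insecure; the key and plaintext below are my own illustrative values, not from the question:

```go
package main

import (
	"crypto/aes"
	"fmt"
)

// encryptECB encrypts plaintext block by block with the same key (ECB).
// The plaintext length must be a multiple of aes.BlockSize (16 bytes).
func encryptECB(key, plaintext []byte) []byte {
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	ciphertext := make([]byte, len(plaintext))
	for i := 0; i < len(plaintext); i += aes.BlockSize {
		// Each 16-byte block is encrypted independently.
		block.Encrypt(ciphertext[i:i+aes.BlockSize], plaintext[i:i+aes.BlockSize])
	}
	return ciphertext
}

func main() {
	key := []byte("mysupersecretkey32bytecharacters")
	plaintext := []byte("exactly 32 bytes of plaintext!!!") // 32 bytes = 2 blocks
	fmt.Printf("%x\n", encryptECB(key, plaintext))
}
```

Because identical plaintext blocks produce identical ciphertext blocks under ECB, this loop is only useful to understand what a mode of operation does, not for real encryption.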
This has nothing to do with Go. AES is a block cipher that encrypts one 16-byte block at a time. To encrypt a longer message, one of several modes of operation must be used.
Related
I am testing out the AES 256 CBC implementation in Golang (Go).
plaintext: {"key1": "value1", "key2": "value2"}
Because the plaintext is 36 B and needs to be a multiple of the block size (16 B), I pad it manually with 12 extra bytes to 48 B.
I understand that this is not the most secure way of doing it, but I am just testing, I will find a better way for production setups.
Inputs:
plaintext: aaaaaaaaaaaa{"key1": "value1", "key2": "value2"}
AES 256 key: b8ae2fe8669c0401fb289e6ab6247924
AES IV: e0332fc2a9743e4f
The code excerpt was extracted, slightly modified, from here:
block, err := aes.NewCipher(key)
if err != nil {
	fmt.Println("Error creating a new AES cipher by using your key!")
	fmt.Println(err)
	os.Exit(1)
}
ciphertext := make([]byte, aes.BlockSize+len(plaintext))
mode := cipher.NewCBCEncrypter(block, iv)
mode.CryptBlocks(ciphertext, plaintext)
fmt.Printf("%x\n", ciphertext)
fmt.Println("len(ciphertext):", len(ciphertext))
CipherText = PlainText + Block - (PlainText MOD Block)
This equation gives the length of the ciphertext for CBC.
So, the line ciphertext := make([]byte, aes.BlockSize+len(plaintext)) satisfies this requirement since my plaintext is always padded to be a multiple of the block size.
Problem:
With Go I get the following ciphertext:
caf8fe667f4087e1b67d8c9c57fcb1f56b368cafb4bfecbda1e481661ab7b93d87703fb140368d3034d5187c53861c7400000000000000000000000000000000
I always get 16 0x00 bytes at the end of my ciphertext, no matter the length of my plaintext.
If I do the same with an online AES calculator, I get this ciphertext:
caf8fe667f4087e1b67d8c9c57fcb1f56b368cafb4bfecbda1e481661ab7b93d87703fb140368d3034d5187c53861c74ccd202bac41937be75731f23796f1516
The first 48 bytes caf8fe667f4087e1b67d8c9c57fcb1f56b368cafb4bfecbda1e481661ab7b93d87703fb140368d3034d5187c53861c74 are the same. But I am missing the last 16 bytes.
This says:
It is acceptable to pass a dst bigger than src, and in that case,
CryptBlocks will only update dst[:len(src)] and will not touch the
rest of dst.
But why is this the case? The length of the ciphertext needs to be longer than the length of the plaintext, and the online AES calculators prove that.
The online tool's ciphertext results if the plaintext:
aaaaaaaaaaaa{"key1": "value1", "key2": "value2"}
is padded with PKCS#7 and the posted key and IV are UTF-8 encoded. Since the size of the plaintext (48 bytes) is already an integer multiple of the block size (16 bytes for AES), a full block of padding is appended according to the rules of PKCS#7, resulting in a 64-byte plaintext and ciphertext.
It is not clear from the question which online tool was used, but the posted ciphertext can be reconstructed with any reliable encryption tool, e.g. CyberChef, see this online calculation. CyberChef applies PKCS#7 padding for AES/CBC by default.
The posted code produces a different ciphertext because:
no PKCS#7 padding is applied. This makes the ciphertext one block shorter (i.e. the last block ccd202bac41937be75731f23796f1516 is missing).
a size of aes.BlockSize + len(plaintext) bytes is allocated for the ciphertext. This causes the allocated size to be too large by aes.BlockSize bytes (i.e. the ciphertext contains 16 0x00 values at the end).
Therefore, for the Go code to produce the same ciphertext as the online tool, 1. the PKCS#7 padding must be added and 2. a size of only len(plaintext) bytes must be allocated for the ciphertext.
The following code is a possible implementation (for PKCS#7 padding pkcs7pad is used):
import (
...
"github.com/zenazn/pkcs7pad"
)
...
key := []byte("b8ae2fe8669c0401fb289e6ab6247924")
iv := []byte("e0332fc2a9743e4f")
plaintext := []byte("aaaaaaaaaaaa{\"key1\": \"value1\", \"key2\": \"value2\"}")
plaintext = pkcs7pad.Pad(plaintext, aes.BlockSize) // 1. pad the plaintext with PKCS#7
block, err := aes.NewCipher(key)
if err != nil {
panic(err)
}
ciphertext := make([]byte, len(plaintext)) // 2. allocate len(plaintext)
mode := cipher.NewCBCEncrypter(block, iv)
mode.CryptBlocks(ciphertext, plaintext)
fmt.Printf("%x\n", ciphertext) // caf8fe667f4087e1b67d8c9c57fcb1f56b368cafb4bfecbda1e481661ab7b93d87703fb140368d3034d5187c53861c74ccd202bac41937be75731f23796f1516
Note that because of the PKCS#7 padding, explicit padding with 'a' characters is no longer required.
The static IV used in the above code is a vulnerability as it leads to reuse of key/IV pairs, which is insecure. In practice, therefore, a random IV is usually generated for each encryption. The IV is not secret, is needed for decryption, and is typically concatenated with the ciphertext. On the decryption side, IV and ciphertext are separated and used for decryption.
Since the size of the IV corresponds to the block size, a size of aes.BlockSize + len(plaintext) must be allocated for the ciphertext, which is equal to the size in the original code. Possibly this is not accidental and was designed with a random IV in mind, but then not implemented consistently. A consistent implementation is:
import (
...
"crypto/rand"
"io"
"github.com/zenazn/pkcs7pad"
)
...
key := []byte("b8ae2fe8669c0401fb289e6ab6247924")
plaintext := []byte("{\"key1\": \"value1\", \"key2\": \"value2\"}")
plaintext = pkcs7pad.Pad(plaintext, aes.BlockSize)
block, err := aes.NewCipher(key)
if err != nil {
panic(err)
}
ciphertext := make([]byte, aes.BlockSize+len(plaintext))
iv := ciphertext[:aes.BlockSize]
_, err = io.ReadFull(rand.Reader, iv) // create a random IV
if err != nil {
panic(err)
}
mode := cipher.NewCBCEncrypter(block, iv)
mode.CryptBlocks(ciphertext[aes.BlockSize:], plaintext)
fmt.Printf("%x\n", ciphertext)
The first 16 bytes of the output correspond to the (random) IV and the rest to the actual ciphertext.
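For completeness, here is a sketch of the matching decryption side, with a minimal inline PKCS#7 pad/unpad in place of the pkcs7pad package (function names are mine, and the padding check is not hardened for production use):

```go
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"errors"
	"fmt"
	"io"
)

// encryptCBC pads with PKCS#7, prefixes a random IV, and returns IV || ciphertext.
func encryptCBC(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	// Minimal PKCS#7 padding: append pad bytes, each holding the pad length.
	pad := aes.BlockSize - len(plaintext)%aes.BlockSize
	padded := append(append([]byte{}, plaintext...), bytes.Repeat([]byte{byte(pad)}, pad)...)
	out := make([]byte, aes.BlockSize+len(padded))
	iv := out[:aes.BlockSize]
	if _, err := io.ReadFull(rand.Reader, iv); err != nil {
		return nil, err
	}
	cipher.NewCBCEncrypter(block, iv).CryptBlocks(out[aes.BlockSize:], padded)
	return out, nil
}

// decryptCBC splits the prefixed IV, decrypts, and strips the padding.
func decryptCBC(key, ivAndCiphertext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	if len(ivAndCiphertext) < 2*aes.BlockSize || len(ivAndCiphertext)%aes.BlockSize != 0 {
		return nil, errors.New("invalid ciphertext length")
	}
	iv, ct := ivAndCiphertext[:aes.BlockSize], ivAndCiphertext[aes.BlockSize:]
	pt := make([]byte, len(ct))
	cipher.NewCBCDecrypter(block, iv).CryptBlocks(pt, ct)
	// Minimal PKCS#7 unpad: the last byte states how many pad bytes to drop.
	pad := int(pt[len(pt)-1])
	if pad == 0 || pad > aes.BlockSize {
		return nil, errors.New("invalid padding")
	}
	return pt[:len(pt)-pad], nil
}

func main() {
	key := []byte("b8ae2fe8669c0401fb289e6ab6247924")
	msg := []byte(`{"key1": "value1", "key2": "value2"}`)
	ct, err := encryptCBC(key, msg)
	if err != nil {
		panic(err)
	}
	pt, err := decryptCBC(key, ct)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pt)) // round-trips to the original JSON
}
```

Because the IV is random, the ciphertext differs on every run, but decryption always recovers the original plaintext.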
I'm not able to understand how message framing with a fixed-size length-prefix header works.
It's said that a fixed-size byte array should contain the length of the message to be sent. But how would you define such a fixed-size byte array, specifically in Go?
Say this is my message:
Hello
Its length is 5.
So if I want to send this through a TCP stream, to make sure I receive the whole message on the other end, I'd have to tell it how many bytes it should read.
A simple header would be length:message:
5:Hello // [53 58 104 101 108 108 111]
But if the message length grows 10x each time, there are going to be more bytes, so the header size is dynamic that way.
36:Hello, this is just a dumb question. // [51 54 58 72 101 108 108 111 44 32 116 104 105 115 32 105 115 32 106 117 115 116 32 97 32 100 117 109 98 32 113 117 101 115 116 105 111 110 46]
So here 36 takes 2 bytes.
One approach I can think of is to set a maximum message length for the protocol, say 10 KB = 10240 bytes, and then pad the message length with leading zeros. That way I'm sure I'll always have a fixed 5-byte header.
Would this work for all cases?
If yes, what if I have a message more than 10KBs, should I split it into 2 messages?
If not, what are other solutions?
I want to implement the solutions in Golang.
UPDATE 1:
I read about endianness, although I wasn't able to understand how it yields a fixed-length header. But I found an example in Python and tried to write it in Go this way:
Client:
const maxLengthBytes = 8
conn, err := net.Dial("tcp", "127.0.0.1:9999")
if err != nil {
fmt.Println(err)
return
}
message := "Hello, this is just a dumb question"
bs := make([]byte, maxLengthBytes)
binary.LittleEndian.PutUint64(bs, uint64(len(message)))
bytes := append(bs, []byte(message)...)
conn.Write(bytes)
Server:
listener, err := net.ListenTCP("tcp", &net.TCPAddr{Port: 9999})
if err != nil {
fmt.Println(err)
return
}
for {
tcp, err := listener.AcceptTCP()
if err != nil {
fmt.Println(err)
continue
}
go Reader(tcp)
}
func Reader(conn *net.TCPConn) {
foundLength := false
messageLength := 0
for {
if !foundLength {
var b = make([]byte, maxLengthBytes)
read, err := conn.Read(b)
if err != nil {
fmt.Println(err)
continue
}
if read != 8 {
fmt.Println("invalid header")
continue
}
foundLength = true
messageLength = int(binary.LittleEndian.Uint64(b))
} else {
var message = make([]byte, messageLength)
read, err := conn.Read(message)
if err != nil {
fmt.Println(err)
continue
}
if read != messageLength {
fmt.Println("invalid data")
continue
}
fmt.Println("Received:", string(message))
foundLength = false
messageLength = 0
}
}
}
Please refer to my answer in this post
TCP client for Android: text is not received in full
Basically, you have to define how the data is stored/formatted.
For example:
We store the prefix length as an int32 (4 bytes) with little endian. That's different from yours.
With your solution, the length is a string, so it's hard to keep its length fixed.
For your solution, you'd have to use a fixed-length string, for example 10 characters, padded with leading zeros.
For your questions.
It doesn't work for all cases with just a length prefix; it has its limitations. For example, if we use an int32 as the prefix length, the message length must be less than Int32.max, right?
Yes, we have to split, or even combine (please refer to my explanation in the link above).
We have many ways to deal with the length limitation if it's a concern (actually, almost all application protocols have a maximum request size).
You could use one more bit to indicate whether or not the message exceeds the maximum length, right?
I have a large int array that I want to persist on the filesystem. My understanding is the best way to store something like this is to use the gob package to convert it to a byte array and then to compress it with gzip.
When I need it again, I reverse the process. I am pretty sure I am storing it correctly, however recovering it is failing with EOF. Long story short, I have some example code below that demonstrates the issue. (playground link here https://play.golang.org/p/v4rGGeVkLNh).
I am not convinced gob is needed; however, from reading around it seems more efficient to store it as a byte array than an int array, but that may not be true. Thanks!
package main
import (
"bufio"
"bytes"
"compress/gzip"
"encoding/gob"
"fmt"
)
func main() {
arry := []int{1, 2, 3, 4, 5}
//now gob this
var indexBuffer bytes.Buffer
writer := bufio.NewWriter(&indexBuffer)
encoder := gob.NewEncoder(writer)
if err := encoder.Encode(arry); err != nil {
panic(err)
}
//now compress it
var compressionBuffer bytes.Buffer
compressor := gzip.NewWriter(&compressionBuffer)
compressor.Write(indexBuffer.Bytes())
defer compressor.Close()
//<--- I think all is good until here
//now decompress it
buf := bytes.NewBuffer(compressionBuffer.Bytes())
fmt.Println("byte array before unzipping: ", buf.Bytes())
if reader, err := gzip.NewReader(buf); err != nil {
fmt.Println("gzip failed ", err)
panic(err)
} else {
//now ungob it...
var intArray []int
decoder := gob.NewDecoder(reader)
defer reader.Close()
if err := decoder.Decode(&intArray); err != nil {
fmt.Println("gob failed ", err)
panic(err)
}
fmt.Println("final int Array content: ", intArray)
}
}
You are using bufio.Writer which–as its name implies–buffers bytes written to it. This means if you're using it, you have to flush it to make sure buffered data makes its way to the underlying writer:
writer := bufio.NewWriter(&indexBuffer)
encoder := gob.NewEncoder(writer)
if err := encoder.Encode(arry); err != nil {
panic(err)
}
if err := writer.Flush(); err != nil {
panic(err)
}
Although the use of bufio.Writer is completely unnecessary as you're already writing to an in-memory buffer (bytes.Buffer), so just skip that, and write directly to bytes.Buffer (and so you don't even have to flush):
var indexBuffer bytes.Buffer
encoder := gob.NewEncoder(&indexBuffer)
if err := encoder.Encode(arry); err != nil {
panic(err)
}
The next error is how you close the gzip stream:
defer compressor.Close()
This deferred closing will only happen when the enclosing function (the main() function) returns, not a second earlier. But by that time you already wanted to read the zipped data, but that might still sit in an internal cache of gzip.Writer, and not in compressionBuffer, so you obviously can't read the compressed data from compressionBuffer. Close the gzip stream without using defer:
if err := compressor.Close(); err != nil {
panic(err)
}
With these changes, you program runs and outputs (try it on the Go Playground):
byte array before unzipping: [31 139 8 0 0 0 0 0 0 255 226 249 223 200 196 200 244 191 137 129 145 133 129 129 243 127 19 3 43 19 11 27 7 23 32 0 0 255 255 110 125 126 12 23 0 0 0]
final int Array content: [1 2 3 4 5]
As a side note: buf := bytes.NewBuffer(compressionBuffer.Bytes()) – this buf is also completely unnecessary, you can just start decoding compressionBuffer itself, you can read data from it that was previously written to it.
As you might have noticed, the compressed data is much larger than the initial, uncompressed data. There are several reasons: both encoding/gob and compress/gzip streams have significant overhead, and they (may) only make the input smaller at larger scale (5 int numbers don't qualify).
Please check related question: Efficient Go serialization of struct to disk
For small arrays, you may also consider variable-length encoding, see binary.PutVarint().
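As a sketch of that idea (my own example, not part of the original answer), binary.PutVarint packs each of the five small ints into a single byte:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeVarints packs each int as a zig-zag varint; small values cost one byte.
func encodeVarints(vals []int) []byte {
	buf := make([]byte, len(vals)*binary.MaxVarintLen64)
	n := 0
	for _, v := range vals {
		n += binary.PutVarint(buf[n:], int64(v))
	}
	return buf[:n]
}

// decodeVarints reverses encodeVarints.
func decodeVarints(data []byte) []int {
	var out []int
	for off := 0; off < len(data); {
		v, size := binary.Varint(data[off:])
		out = append(out, int(v))
		off += size
	}
	return out
}

func main() {
	enc := encodeVarints([]int{1, 2, 3, 4, 5})
	fmt.Println("encoded length:", len(enc)) // 5 bytes, far below the gob+gzip size
	fmt.Println("decoded:", decodeVarints(enc))
}
```

For this tiny array, 5 bytes versus the ~50-byte gob+gzip stream shown above illustrates why the heavier encodings only pay off at scale.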
Background
I'm writing a few packages to communicate with the OpenVas vulnerability scanner. The scanner uses a few different proprietary protocols to communicate; all consist of either XML or text strings sent over a unix socket or TCP connection (I'm using a unix socket).
The issue I'm having is with the OTP protocol (OpenVas' internal protocol, which is not well documented).
I can run the following command using netcat and I will get a response back in under a second:
echo -en '< OTP/2.0 >\nCLIENT <|> NVT_INFO\n' | ncat -U /var/run/openvassd.sock
This results in a fairly large response which looks like this in terminal:
< OTP/2.0 >
SERVER <|> NVT_INFO <|> 201802131248 <|> SERVER
SERVER <|> PREFERENCES <|>
cache_folder <|> /var/cache/openvas
include_folders <|> /var/lib/openvas/plugins
max_hosts <|> 30
//lots more here
So for example, I previously had some code like this for reading the response back:
func (c Client) read() ([]byte, error) {
// set up buffer to read in chunks
bufSize := 8096
resp := []byte{}
buf := make([]byte, bufSize)
for {
n, err := c.conn.Read(buf)
resp = append(resp, buf[:n]...)
if err != nil {
if err != io.EOF {
return resp, fmt.Errorf("read error: %s", err)
}
break
}
fmt.Println("got", n, "bytes.")
}
fmt.Println("total response size:", len(resp))
return resp, nil
}
I get the full result, but it comes in small pieces (I guess line by line), so the output I see is something like this (over the course of a minute or so before showing the full response):
got 53 bytes.
got 62 bytes.
got 55 bytes.
got 62 bytes.
got 64 bytes.
got 59 bytes.
got 58 bytes.
got 54 bytes.
got 54 bytes.
got 54 bytes.
got 64 bytes.
got 59 bytes.
... (more)
So I decided to try ioutil.ReadAll:
func (c Client) read() ([]byte, error) {
fmt.Println("read start")
d, err := ioutil.ReadAll(c.conn)
fmt.Println("read done")
return d, err
}
This again returns the full response, but the time between "read start" and "read done" is around a minute, compared to the under-a-second the command is expected to take.
Any thoughts on why the read via Go is so slow compared to netcat? How can I diagnose/fix the issue?
It appears the service is waiting for more input, and eventually times out after a minute. In your CLI example, once the echo command completes that side of the pipe is shutdown for writes, in which case the service is notified by a 0-length recv.
In order to do the same in Go, you need to call CloseWrite on the net.UnixConn after you have completed sending the command.
c.conn.(*net.UnixConn).CloseWrite()
I am using this library for sessions.
https://github.com/codegangsta/martini-contrib/tree/master/sessions
It says that:
It is recommended to use an authentication key with 32 or 64 bytes. The encryption key, if set, must be either 16, 24, or 32 bytes to select AES-128, AES-192, or AES-256 modes.
How do I generate a 64-byte key? Is it as straightforward as []byte("64characterslongstring")? I suspect it is not always so straightforward.
To generate a slice of 64 random bytes:
package main
import "crypto/rand"
func main() {
key := make([]byte, 64)
_, err := rand.Read(key)
if err != nil {
// handle error here
}
}