Need aes cipher function allow input [6]byte [closed] - go

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
Closed last year.
background
There is a need for a cipher that can encode and decode a [6]byte.
The standard library's aes.NewCipher doesn't allow this, because its block size is defined as 16 bytes.
I can't simply pad the 6 bytes to 16 bytes: I need to print the [6]byte as a barcode, use the barcode remotely, and decode it on the remote side.
code
This can be run in the Go Playground:
package main

import (
	"bytes"
	"crypto/aes"
	"fmt"
)

func main() {
	plain := []byte("4512wqdeqwbuobouihodqwbuo")[:6]
	encrypt := make([]byte, 6)
	plain2 := make([]byte, 6)
	cipher, err := aes.NewCipher([]byte("4512wqdeqwbuobouihodqwbuo")[:16])
	if err != nil {
		fmt.Println(err)
		return
	}
	cipher.Encrypt(encrypt, plain) // fails: the input is not a full 16-byte block
	cipher.Decrypt(plain2, encrypt)
	if !bytes.Equal(plain, plain2) {
		fmt.Println("can't be", plain, plain2, encrypt)
	}
}
error:
crypto/aes: input not full block
question
Is there a third-party function that matches my requirement?
Or can the standard library achieve this in some way?
It seems naive to implement such a function with bit-shifts and XOR; is there something better?
For now, I have implemented this function with bit-shifts.

There is a need for a cipher can encode and decode [6]byte.
There's a difference between encoding (displaying data in a different format) and encryption (providing confidentiality). Ciphers are used for encryption; I'll assume you want to encrypt the data for confidentiality reasons.
Is there a third party function can match my require?
Or std functions can achieve this in some way?
In theory there are ways where padding is not required: see the different modes of operation. In modes such as CTR and OFB no padding is needed, which effectively turns the block cipher into a stream cipher.
There are even dedicated stream ciphers, such as Salsa or ChaCha.
So now you could encrypt 6 bytes of plaintext into 6 bytes of ciphertext.
There are two issues when you require sending the same amount of encrypted data as plaintext:
To keep data confidential while reusing the same key for multiple encryptions, each of these ciphers needs some initial state (an IV), which can be random or a counter. It is imperative that the same key and IV are never reused, so under normal circumstances this counter or state is sent along with the encrypted data. Using a static vector allows the encryption to be broken, partly or completely. That's the reason the people in the comments cannot give you a simple answer: there is no proper encryption without additional data being transmitted.
The other issue is data integrity. Without transmitting additional bytes, if the ciphertext is modified in transit (intentionally or not), the receiving party has no means to detect the modification. I assume with 6 bytes there is no integrity control anyway, so maybe this is not your concern.
It's naive to implement this specified function by bit-shift and xor, is there more?
Yes, you can encrypt data using a static IV, but that is not proper encryption as we understand it: given multiple messages or some initial information, the data could be completely decrypted.
Using something simple like XOR or matrix operations could just as well reveal the key itself.

Related

Re-using the same encoder/decoder for the same struct type in Go without creating a new one

I was looking for the quickest and most efficient way to store structs of data persisted on the filesystem. I came across the gob module, which allows encoders and decoders to be set up for structs, converting them to []byte (binary) that can be stored.
This was relatively easy - here's a decoding example:
// Per-item get request
// binary = []byte for the encoded binary from the database
// target = struct receiving what's being decoded
func Get(path string, target *SomeType) {
	binary := someFunctionToGetBinaryFromSomeDB(path)
	dec := gob.NewDecoder(bytes.NewReader(binary))
	dec.Decode(target)
}
However, when I benchmarked this against JSON encoder/decoder, I found it to be almost twice as slow. This was especially noticeable when I created a loop to retrieve all structs. Upon further research, I learned that creating a NEW decoder every time is really expensive. 5000 or so decoders are re-created.
// Imagine 5000 items in total
func GetAll(target *[]SomeType) {
	results := getAllBinaryStructsFromSomeDB()
	for results.next() {
		binary := results.getBinary()
		// Making a new decoder 5000 times
		dec := gob.NewDecoder(bytes.NewReader(binary))
		var item SomeType
		dec.Decode(&item)
		// ... append item to *target
	}
}
I'm stuck here trying to figure out how I can recycle (reduce reuse recycle!) a decoder for list retrieval. Understanding that the decoder takes an io.Reader, I was thinking it would be possible to 'reset' the io.Reader and use the same reader at the same address for a new struct retrieval, while still using the same decoder. I'm not sure how to go about doing that and I'm wondering if anyone has any ideas to shed some light. What I'm looking for is something like this:
// Imagine 5000 items in total
func GetAll(target *[]SomeType) {
	// Set up some kind of recyclable reader
	var binary []byte
	reader := bytes.NewReader(binary)
	// Make a decoder based on that reader
	dec := gob.NewDecoder(reader)
	results := getAllBinaryStructsFromSomeDB()
	for results.next() {
		// Insert some kind of binary / decoder reset
		// Then do something like:
		reader.Reset(results.nextBinary())
		var item SomeType
		dec.Decode(&item) // except of course this won't work
		// ... append item to *target
	}
}
Thanks!
I was looking for the quickest/efficient way to store Structs of data to persist on the filesystem
Instead of serializing your structs, represent your data primarily in a pre-made data store that fits your usage well. Then model that data in your Go code.
This may seem like the hard way or the long way to store data, but it will solve your performance problem by intelligently indexing your data and allowing filtering to be done without a lot of filesystem access.
I was looking for ... data to persist.
Let's start there as a problem statement.
gob module allows encoders and decoders to be set up for structs to convert to []byte (binary) that can be stored.
However, ... I found it to be ... slow.
It would be. You'd have to go out of your way to make data storage any slower. Every object you instantiate from your storage will have to come from a filesystem read. The operating system will cache these small files well, but you'll still be reading the data every time.
Every change will require rewriting all the data, or cleverly determining which data to write to disk. Recall that there is no "insert between" operation for files; you'll be rewriting all bytes after to add bytes in the middle of a file.
You could do this concurrently, of course, and goroutines handle a bunch of async work like filesystem reads very well. But now you've got to start thinking about locking.
My point is, for the cost of trying to serialize your structures you can better describe your data at the persistent layer, and solve problems you're not even working on yet.
SQL is a pretty obvious choice, since you can make it work with SQLite as well as other SQL servers that scale well; I hear MongoDB is easy to wrangle these days, and depending on what you're doing with the data, Redis has a number of attractive list, set, and key/value operations that can easily be made atomic and consistent.
The encoder and decoder are designed to work with streams of values. The encoder writes information describing a Go type to the stream once before transmitting the first value of the type. The decoder retains received type information for decoding subsequent values.
The type information written by the encoder is dependent on the order that the encoder encounters unique types, the order of fields in structs and more. To make sense of the stream, a decoder must read the complete stream written by a single encoder.
It is not possible to recycle decoders because of the way that type information is transmitted.
To make this more concrete, the following does not work:
var v1, v2 Type
var buf bytes.Buffer
gob.NewEncoder(&buf).Encode(v1)
gob.NewEncoder(&buf).Encode(v2)
var v3, v4 Type
d := gob.NewDecoder(&buf)
d.Decode(&v3)
d.Decode(&v4)
Each call to Encode writes information about Type to the buffer. The second call to Decode fails because a duplicate type is received.

How to save, and then serve again data of type io.Reader?

I would like to parse, several times with gocal, data that I retrieve through an HTTP call. Since I would like to avoid making the call for each parse, I would like to save this data and reuse it.
The Body I get from http.Get is of type io.ReadCloser. The gocal parser requires an io.Reader, so that works.
Since I can retrieve Body only once, I can save it with body, _ := io.ReadAll(get.Body), but then I do not know how to serve the []byte back as an io.Reader (to the gocal parser, several times, to account for different parsing conditions).
As you have figured out, http.Response.Body is exposed as an io.Reader. This reader is not reusable because it is connected straight to the underlying connection* (which might be TCP, UDP, or any other stream-like reader under the net package).
Once you read the bytes out of the connection, new bytes are sitting there waiting for another read.
In order to save the response, you indeed need to drain it first and save the result in a variable.
body, _ := io.ReadAll(get.Body)
To reuse that slice of bytes many times, the standard library provides bytes.NewReader, which wraps a []byte in an in-memory reader.
This reader offers a Reset([]byte) method to reset its state.
bytes.Reader.Reset is very useful for reading the same byte slice multiple times with no allocations. In comparison, bytes.NewReader allocates every time it is called.
Finally, between two consecutive calls to the parser, reset the reader with the byte slice you collected previously, such as:
buf := bytes.NewReader(body)
// initialize the parser with buf
c.Parse()
// process the result

// reset buf, then parse again
buf.Reset(body)
c.Parse()
You can try this version: https://play.golang.org/p/YaVtCTZHZEP. It uses a strings.NewReader buffer, but the interface and behavior are similar.
*Not super obvious, but that is the general principle: the transport reads the headers and leaves the body untouched unless you consume it.

Should I validate that a user's key pair generated by the RSA package is unique (that no other user has the same key pair)?

I am currently working with digital signatures and want to figure out how to give each registered user their own key pair for the encoding and decoding process. I am building this system in Go, using the crypto/rsa package. I have already read some articles about how to make a secure digital signature. The first thing I am trying to build to secure the process is asymmetric encryption.
The first problem I am facing: I ask myself, "Should I validate that no other user has the key pair generated by the RSA package?" That would ensure no user can impersonate another, by accident or on purpose, because they have the same key pair (even if the chance is really small).
Please give me some insight into this situation. If my question is not clear enough, feel free to ask or complain; I am having a hard time thinking through every security aspect for my users and system.
import (
	"crypto/rand"
	"crypto/rsa"
	"encoding/pem"
	...
)

...

func createKeyPairs(userRegistered *User) (err error) {
	keyPairs, err := rsa.GenerateKey(rand.Reader, 4096)
	if err != nil {
		return err
	}
	// SHOULD I ADD SOME VALIDATION FOR THE KEY PAIRS GENERATED BY THE CRYPTO RSA AND RAND PACKAGES HERE?
	caPrivateKeyPEMFile, err := os.Create(userRegistered.ID + "PrivateKey.pem")
	pem.Encode(caPrivateKeyPEMFile, &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(keyPairs),
	})
	caPublicKeyPEMFile, err := os.Create(userRegistered.ID + "PublicKey.pem")
	pem.Encode(caPublicKeyPEMFile, &pem.Block{
		Type:  "RSA PUBLIC KEY",
		Bytes: x509.MarshalPKCS1PublicKey(&keyPairs.PublicKey),
	})
	return nil
}
No, you shouldn't.
Mainly because strict private-key comparison is not sufficient: you would need to make sure the two primes in the modulus are different.
The second reason is that it would be mostly pointless: the likelihood of choosing the same prime numbers is incredibly low, you would just be wasting your time.
Given a 4096 bit RSA key, you're looking for two 2048 bit prime numbers. The chances of collision for those are astronomically small.
One case where it might be useful would be if you had terrible entropy on your machine. But then you probably have other problems as well, and that's probably a separate question.
For more details on why the modulus primes are important (as opposed to the raw key contents), and the details on calculating the likelihood of prime collisions, please see this security.se question.
A third reason is that it would require you to keep the parameters of all users' private keys. You definitely shouldn't, and you probably shouldn't be generating key pairs on their behalf in the first place.

How do I interpret a python byte string coming from F1 2020 game UDP packet?

The title may be wildly incorrect for what I'm trying to work out.
I'm trying to interpret packets I am receiving from a racing game in a way that I understand, but I honestly don't really know what I'm looking at, or what to search for to understand it.
Information on the packets I am receiving is here:
https://forums.codemasters.com/topic/54423-f1%C2%AE-2020-udp-specification/?tab=comments#comment-532560
I'm using python to print the packets, here's a snippet of the output, which I don't understand how to interpret.
received message: b'\xe4\x07\x01\x03\x01\x07O\x90.\xea\xc2!7\x16\xa5\xbb\x02C\xda\n\x00\x00\x00\xff\x01\x00\x03:\x00\x00\x00 A\x00\x00\xdcB\xb5+\xc1#\xc82\xcc\x10\t\x00\xd9\x00\x00\x00\x00\x00\x12\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00$tJ\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01
I'm very new to coding, and not sure what my next step is, so a nudge in the right direction will help loads, thanks.
This is the python code:
import socket

UDP_IP = "127.0.0.1"
UDP_PORT = 20777

sock = socket.socket(socket.AF_INET,    # Internet
                     socket.SOCK_DGRAM)  # UDP
sock.bind((UDP_IP, UDP_PORT))

while True:
    data, addr = sock.recvfrom(4096)
    print("received message:", data)
The website you link to is describing the data format. All data is represented as a series of 1s and 0s. A byte is a series of 8 1s and 0s. However, just because you have a series of bytes doesn't mean you know how to interpret them. Do they represent a character? An integer? Can that integer be negative? All of that is defined by whoever crafted the data in the first place.
The type descriptions you see at the top tell you how to actually interpret that series of 1s and 0s. When you see "uint8", that is an "unsigned integer that is 8 bits (1 byte) long", in other words a positive number between 0 and 255. An "int8", on the other hand, is a signed 8-bit integer, a number that can be positive or negative (so the range is -128 to 127). The same basic idea applies to the *16 and *64 variants, just with 16 or 64 bits. A float represents a floating-point number (a number with a fractional part, such as 1.2345), generally 4 bytes long. Additionally, you need to know the order in which to interpret the bytes within a word (left-to-right or right-to-left). This is referred to as endianness, and every computer architecture has a native endianness (big-endian or little-endian).
Given all of that, you can interpret the PacketHeader. The easiest way is probably to use the struct package in Python. Details can be found here:
https://docs.python.org/3/library/struct.html
As a proof of concept, the following will interpret the first 24 bytes:
import struct
data = b'\xe4\x07\x01\x03\x01\x07O\x90.\xea\xc2!7\x16\xa5\xbb\x02C\xda\n\x00\x00\x00\xff\x01\x00\x03:\x00\x00\x00 A\x00\x00\xdcB\xb5+\xc1#\xc82\xcc\x10\t\x00\xd9\x00\x00\x00\x00\x00\x12\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00$tJ\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01'
#Note that I am only taking the first 24 bytes. You must pass data that is
#the appropriate length to the unpack function. We don't know what everything
#else is until after we parse out the header
header = struct.unpack('<HBBBBQfIBB', data[:24])
print(header)
You basically want to read the first 24 bytes to get the header of the message. From there, you need to use the m_packetId field to determine what the rest of the message is. As an example, this particular packet has a packetId of 7, which is a "Car Status" packet. So you would look at the packing format for the struct CarStatus further down on that page to figure out how to interpret the rest of the message. Rinse and repeat as data arrives.
Update: In the format string, the < tells you to interpret the bytes as little-endian with no alignment (based on the fact that the documentation says it is little-endian and packed). I would recommend reading through the entire section on Format Characters in the documentation above to fully understand what all is happening regarding alignment, but in a nutshell it will try to align those bytes with their representation in memory, which may not match exactly the format you specify. In this case, HBBBBQ takes up 2 bytes more than you'd expect. This is because your computer will try to pack structs in memory so that they are word-aligned. Your computer architecture determines the word alignment (on a 64-bit computer, words are 64-bits, or 8 bytes, long). A Q takes a full word, so the packer will try to align everything before the Q to a word. However, HBBBB only requires 6 bytes; so, Python will, by default, pad an extra 2 bytes to make sure everything lines up. Using < at the front both ensures that the bytes will be interpreted in the correct order, and that it won't try to align the bytes.
Just for information, if someone else is looking for this: in Python there is the library f1-2019-telemetry. The documentation is missing a "how to use" section, so here is a snippet:
from f1_2020_telemetry.packets import *
...

udp_socket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM)
udp_socket.bind((host, port))

while True:
    udp_packet = udp_socket.recv(2048)
    packet = unpack_udp_packet(udp_packet)
    if isinstance(packet, PacketSessionData_V1):  # refer to doc for classes / attributes
        print(packet.trackTemperature)  # for example
    if isinstance(packet, PacketParticipantsData_V1):
        for i, participant in enumerate(packet.participants):
            print(DriverIDs[participant.driverId])  # the library has mappings for pilot name / track name / ...
Regards,
Nicolas

How to use audioConverterFillComplexBuffer and its callback?

I need a step by step walkthrough on how to use audioConverterFillComplexBuffer and its callback. No, don't tell me to read the Apple docs. I do everything they say and the conversion always fails. No, don't tell me to go look for examples of audioConverterFillComplexBuffer and its callback in use - I've duplicated about a dozen such examples both line for line and modified and the conversion always fails. No, there isn't any problem with the input data. No, it isn't an endian issue. No, the problem isn't my version of OS X.
The problem is that I don't understand how audioConverterFillComplexBuffer works, so I don't know what I'm doing wrong. And nothing out there is helping me understand, because it seems like nobody on Earth really understands how audioConverterFillComplexBuffer works, either. From the people who actually use it (I spy cargo-cult programming in their code) to even the authors of Learning Core Audio and/or Apple itself (http://stackoverflow.com/questions/13604612/core-audio-how-can-one-packet-one-byte-when-clearly-one-packet-4-bytes).
This isn't just a problem for me, it's a problem for anybody who wants to program high-performance audio on the Mac platform. Threadbare documentation that's apparently wrong and examples that don't work are no fun.
Once again, to be clear: I NEED A STEP BY STEP WALKTHROUGH ON HOW TO USE audioConverterFillComplexBuffer plus its callback and so does the entire Mac developer community.
This is a very old question but I think is still relevant. I've spent a few days fighting this and have finally achieved a successful conversion. I'm certainly no expert but I'll outline my understanding of how it works. Note I'm using Swift, which I'm also just learning.
Here are the main function arguments:
inAudioConverter: AudioConverterRef: This one is simple enough, just pass in a previously created AudioConverterRef.
inInputDataProc: AudioConverterComplexInputDataProc: The very complex callback. We'll come back to this.
inInputDataProcUserData: UnsafeMutableRawPointer?: This is a reference to whatever data you may need provided to the callback function. Important because even in Swift the callback can't inherit context. E.g. you may need to access an AudioFileID or keep track of the number of packets read so far.
ioOutputDataPacketSize: UnsafeMutablePointer<UInt32>: This one is a little misleading. The name implies it's the packet size but reading the documentation we learn it's the total number of packets expected for the output format. You can calculate this as outPacketCount = frameCount / outStreamDescription.mFramesPerPacket.
outOutputData: UnsafeMutablePointer<AudioBufferList>: This is an audio buffer list which you need to have already initialized with enough space to hold the expected output data. The size can be calculated as byteSize = outPacketCount * outMaxPacketSize.
outPacketDescription: UnsafeMutablePointer<AudioStreamPacketDescription>?: This is optional. If you need packet descriptions, pass in a block of memory the size of outPacketCount * sizeof(AudioStreamPacketDescription).
As the converter runs it will repeatedly call the callback function to request more data to convert. The main job of the callback is simply to read the requested number of packets from the source data. The converter will then convert the packets to the output format and fill the output buffer. Here are the arguments for the callback:
inAudioConverter: AudioConverterRef: The audio converter again. You probably won't need to use this.
ioNumberDataPackets: UnsafeMutablePointer<UInt32>: The number of packets to read. After reading, you must set this to the number of packets actually read (which may be less than the number requested if we reached the end).
ioData: UnsafeMutablePointer<AudioBufferList>: An AudioBufferList which is already configured except for the actual data. You need to initialise ioData.mBuffers.mData with enough capacity to hold the expected number of packets, i.e. ioNumberDataPackets * inMaxPacketSize. Set the value of ioData.mBuffers.mDataByteSize to match.
outDataPacketDescription: UnsafeMutablePointer<UnsafeMutablePointer<AudioStreamPacketDescription>?>?: Depending on the formats used, the converter may need to keep track of packet descriptions. You need to initialise this with enough capacity to hold the expected number of packet descriptions.
inUserData: UnsafeMutableRawPointer?: The user data that you provided to the converter.
So, to start you need to:
Have sufficient information about your input and output data, namely the number of frames and maximum packet sizes.
Initialise an AudioBufferList with sufficient capacity to hold the output data.
Call AudioConverterFillComplexBuffer.
And on each run of the callback you need to:
Initialise ioData with sufficient capacity to store ioNumberDataPackets of source data.
Initialise outDataPacketDescription with sufficient capacity to store ioNumberDataPackets of AudioStreamPacketDescriptions.
Fill the buffer with source packets.
Write the packet descriptions.
Set ioNumberDataPackets to the number of packets actually read.
return noErr if successful.
Here's an example where I read the data from an AudioFileID:
var converter: AudioConverterRef?
// User data holds an AudioFileID, input max packet size, and a count of packets read
var uData = (fRef, maxPacketSize, UnsafeMutablePointer<Int64>.allocate(capacity: 1))

err = AudioConverterNew(&inStreamDesc, &outStreamDesc, &converter)
err = AudioConverterFillComplexBuffer(converter!, { _, ioNumberDataPackets, ioData, outDataPacketDescription, inUserData in
    let uData = inUserData!.load(as: (AudioFileID, UInt32, UnsafeMutablePointer<Int64>).self)
    ioData.pointee.mBuffers.mDataByteSize = uData.1
    ioData.pointee.mBuffers.mData = UnsafeMutableRawPointer.allocate(byteCount: Int(uData.1), alignment: 1)
    outDataPacketDescription?.pointee = UnsafeMutablePointer<AudioStreamPacketDescription>.allocate(capacity: Int(ioNumberDataPackets.pointee))
    let err = AudioFileReadPacketData(uData.0, false, &ioData.pointee.mBuffers.mDataByteSize, outDataPacketDescription?.pointee, uData.2.pointee, ioNumberDataPackets, ioData.pointee.mBuffers.mData)
    uData.2.pointee += Int64(ioNumberDataPackets.pointee)
    return err
}, &uData, &numPackets, &bufferList, nil)
Again, I'm no expert, this is just what I've learned by trial and error.
