Write 2 bytes of data, one byte and one byte in reverse, in Go

I have to use a specific C library from Go, and its C API is
int write(unsigned short usCount, unsigned short usData[]);
I cannot write the following, because a []C.ushort does not support two-level indexing:
data := make([]C.ushort, 4)
// convert the string to ASCII one char at a time and save it into data
data[0][1] = int(char)
data[0][0] = int(char)
data[1][1] = int(char)
data[1][0] = int(char)
// ...etc
write(4, data);
Is there a better way to handle this case?
P.S. The indexing is reversed because I am sending little-endian data.
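Since C.ushort is 16 bits wide, each array element can simply hold two ASCII bytes, and the "reversed" indexing is nothing more than the low and high byte of one little-endian value. Below is a minimal C sketch of that packing (pack_pair and pack_string are hypothetical helper names, and it assumes the device expects the first character in the low byte; swap the arguments if it expects the opposite). The same shift/OR logic can be mirrored in Go, e.g. data[i/2] = C.ushort(uint16(s[i]) | uint16(s[i+1])<<8):
#include <stddef.h>

/* Pack two ASCII bytes into one little-endian unsigned short:
   lo becomes the low byte, hi the high byte. */
static unsigned short pack_pair(unsigned char lo, unsigned char hi)
{
    return (unsigned short)(lo | (hi << 8));
}

/* Fill usData from a string, two characters per element. */
static void pack_string(const char *s, size_t len, unsigned short *usData)
{
    for (size_t i = 0; i + 1 < len; i += 2)
        usData[i / 2] = pack_pair((unsigned char)s[i], (unsigned char)s[i + 1]);
}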

Storing images using Blob data type in aerospike

I am trying to write image files (.png/.jpeg/etc.) to Aerospike. From the Aerospike documentation (https://docs.aerospike.com/server/guide/data-types/blob) I understand that this can be achieved using the blob data type. However, there is no documentation on using the blob data type in the C API library (https://docs.aerospike.com/apidocs/c/, https://developer.aerospike.com/client/c) or in aql. I know how to write other data types (int, string, CDT, etc.) to Aerospike using the C API library, but with the blob data type I don't know where to start. Can someone point me to documentation on using the blob data type with the Aerospike C API library?
Did you look at as_record_set_bytes()?
https://docs.aerospike.com/apidocs/c/df/dd6/structas__record.html#aea70facda82be0916d0a5342d81ceb2b
https://docs.aerospike.com/apidocs/c/d9/d3c/structas__bytes.html
As answered by @pgupta, as_record_set_bytes() can be used to write bytes to Aerospike. A C code snippet to write is as follows:
as_bytes b;
FILE *fp;
long filelen;
uint8_t *buffer;

// Read the file into a byte array
fp = fopen(img_path, "rb");  // open the file in binary read mode
fseek(fp, 0, SEEK_END);      // jump to the end of the file
filelen = ftell(fp);         // get the current byte offset
rewind(fp);                  // jump back to the beginning of the file

buffer = (uint8_t *)malloc(filelen * sizeof(uint8_t));
fread(buffer, filelen, 1, fp);
fclose(fp);

// Set as_bytes
as_bytes_init(&b, filelen);
as_bytes_set(&b, 0, buffer, filelen);
as_record_set_bytes(&rec, "attachment", &b); // set the blob data
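For context, a hedged sketch of the calls around that snippet, assuming an already-connected aerospike instance named as; the namespace, set, key, and bin names here are placeholders:
as_key key;
as_error err;
as_record rec;

as_key_init_str(&key, "test", "images", "img1"); // placeholder namespace/set/key
as_record_inita(&rec, 1);                        // record with a single bin

// ... read the file and build the as_bytes b as in the snippet above ...
as_record_set_bytes(&rec, "attachment", &b);

if (aerospike_key_put(&as, &err, NULL, &key, &rec) != AEROSPIKE_OK) {
    fprintf(stderr, "put failed: %s\n", err.message);
}
as_record_destroy(&rec);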
Bytes can be retrieved using as_record_get_bytes(). The as_bytes can be converted into a uint8_t* using as_bytes_copy(). A C code snippet to read is as follows:
as_bytes *b;
uint8_t *res;
uint32_t res_size;

aerospike_key_get(as, &err, NULL, &key, &rec);
b = as_record_get_bytes(rec, bin_name);
res = (uint8_t *)malloc(b->size * sizeof(uint8_t));
res_size = as_bytes_copy(b, 0, res, b->size);
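A hedged note on cleanup: as far as I understand the API, the as_bytes returned by as_record_get_bytes() is owned by the record, so only the copied buffer and the record itself need to be released:
free(res);              // release the copied bytes when done
as_record_destroy(rec); // also releases the as_bytes owned by the record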
A detailed description of each of the Aerospike functions can be found in the C API client documentation (https://docs.aerospike.com/apidocs/c/index.html).

Decoding of char array data

I have a mesh from which I need to read the vertex positions, but I can only get a buffer with that data, which I can seemingly read as a UTF-8 char array.
Currently I'm getting the data from the buffer into the array I mentioned and writing it into a char*, but I can't get the decoding right, or so it seems.
The following code reads the data from the buffer:
char* GetDataFromIBuffer(Windows::Storage::Streams::IBuffer^ container)
{
    unsigned int bufferLength = container->Length;
    auto dataReader = Windows::Storage::Streams::DataReader::FromBuffer(container);
    Platform::Array<unsigned char>^ managedBytes =
        ref new Platform::Array<unsigned char>(bufferLength);
    dataReader->ReadBytes(managedBytes);
    char* bytes = new char[bufferLength];
    for (unsigned int i = 0; i < bufferLength; i++)
    {
        if (managedBytes[i] == '\0')
        {
            bytes[i] = '0'; // replace NUL bytes so the buffer can be inspected as a string
        }
        else
        {
            bytes[i] = managedBytes[i];
        }
    }
    return bytes;
}
I can see the data in debug mode, but I need a way to make it readable and write it into a file, so that I can copy the mesh data and draw the mesh in a separate program.
The following screenshot shows the data as it appears in the array: [debug mode screenshot]
Be careful not to mix up text encoding and data types.
char is a type often used for buffers because it has the size of a byte, but that doesn't mean the data contained in the buffer is text.
Your debug view seems to confirm that the data inside your buffer is not text, because when interpreted as text it gives weird characters such as 'ÿ', '^', etc.
UTF-8 is a way to encode Unicode text, so it has nothing to do with binary data.
You need to find a way to cast your buffer data into the actual type of the data; that should be documented wherever you got the data (maybe it's just an array of floats?).
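For example, if the buffer turned out to hold tightly packed 32-bit floats (an assumption to verify against the mesh documentation), a plain C sketch of reinterpreting and dumping it as text could look like this; dump_floats is a hypothetical helper:
#include <stdio.h>
#include <string.h>

/* Reinterpret a raw byte buffer as 32-bit floats and dump them as text.
   Assumes the buffer really holds tightly packed floats, e.g. x,y,z
   vertex positions; byteLen is the buffer size in bytes. */
static void dump_floats(const unsigned char *bytes, size_t byteLen, FILE *out)
{
    size_t count = byteLen / sizeof(float);
    for (size_t i = 0; i < count; i++) {
        float f;
        memcpy(&f, bytes + i * sizeof(float), sizeof(float)); // avoids aliasing issues
        fprintf(out, "%f%c", f, (i % 3 == 2) ? '\n' : ' ');   // 3 floats per line (x y z)
    }
}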

Mifare cards: distinguish between 4-byte and 7-byte UIDs

I have a card reader that always reports 64 bits and can read cards with 4- or 7-byte UIDs.
As an example, it can report:
04-18-c5-82-00-00-00-00 - a 4-byte UID in the form uid0-uid1-uid2-uid3-00-00-00-00
04-18-c5-82-f1-3b-81-00 - a 7-byte UID in the form uid0-uid1-uid2-uid3-uid4-uid5-uid6-00
What prevents a 7-byte UID from having uid4, uid5, and uid6 set to zero? Is this covered in a spec? If so, which spec?
What prevents a 7-byte UID from having uid4, uid5 and uid6 set to zero?
Nothing. The format of the UID (as used by MIFARE cards) is defined in ISO/IEC 14443-3. Specifically for MIFARE cards, NXP has (or at least had?) some further allocation logic for 4 byte UIDs, but that's not publicly available.
Is it possible to distinguish the two cases?
If the reader outputs the UIDs exactly in the form that you showed in your example, then the answer is no (at least not reliably). However, some readers output the UID on 8 bytes and include the cascade tag for 7-byte-UIDs. Thus, all 7-byte-UIDs start with 0x88 for those readers. This does not seem to be the case with your reader.
Are there possible strategies to distinguish the two cases?
Some strategies come to mind to distinguish 4-byte UIDs from 7-byte UIDs.
The first byte of a 7-byte UID is the manufacturer code as defined in ISO/IEC 7816-6 (see How to detect manufacturer from NFC tag using Android? on how to obtain the list). Thus, if you have a limited set of manufacturers (e.g. if you only use MIFARE cards with chips from NXP), you could interpret all UIDs that start with NXP's manufacturer code (0x04) as 7-byte UIDs. Nevertheless, you should be aware that 4-byte UIDs are allowed to start with 0x04 as well, so this method is not 100% reliable and may fail in some cases.
The first byte of a 4-byte UID must not contain any of the following values: 'x8' (with x != '0'), 'xF'. If you find the first byte to match any of those values, you can assume the UID consists of 7 bytes.
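A hedged C sketch combining these heuristics (best-effort only, per the caveats above; guess_uid_length is a hypothetical name, and the input is assumed to be the 8-byte field exactly as the reader reports it):
#include <stdint.h>

/* Best-effort guess whether an 8-byte reader report holds a 4-byte or a
   7-byte UID. Not 100% reliable: a genuine 7-byte UID may have
   uid4..uid6 all zero, and 4-byte UIDs may also start with 0x04. */
static int guess_uid_length(const uint8_t uid[8])
{
    /* Readers that include the cascade tag report 7-byte UIDs with a
       leading 0x88. */
    if (uid[0] == 0x88)
        return 7;

    /* First-byte values of the form x8 (x != 0) or xF are not allowed
       for 4-byte UIDs. */
    uint8_t hi = uid[0] >> 4;
    uint8_t lo = uid[0] & 0x0F;
    if ((lo == 0x08 && hi != 0x00) || lo == 0x0F)
        return 7;

    /* All-zero trailing bytes merely suggest a 4-byte UID. */
    if (uid[4] == 0 && uid[5] == 0 && uid[6] == 0)
        return 4;

    return 7;
}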
If you can get the ATQA response, you can distinguish them: the lower byte of the ATQA tells you how long the UID is (4/7/10 bytes). As far as I know there is no other way to distinguish them with 100% assurance.
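A small sketch of that check, assuming atqa_low holds the least significant ATQA byte (bits 7 and 8 of it carry the UID size coding from ISO/IEC 14443-3); uid_length_from_atqa is a hypothetical helper:
#include <stdint.h>

/* Decode the UID size from the low ATQA byte (ISO/IEC 14443-3):
   bits 7 and 8 are 00 for single (4-byte), 01 for double (7-byte),
   and 10 for triple (10-byte) size UIDs. */
static int uid_length_from_atqa(uint8_t atqa_low)
{
    switch ((atqa_low >> 6) & 0x03) {
    case 0:  return 4;   /* single size */
    case 1:  return 7;   /* double size */
    case 2:  return 10;  /* triple size */
    default: return -1;  /* 11 is reserved for future use */
    }
}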
I know it's a bit late to the party, but for anyone who has the same doubt:
The documented way of deriving a 4-byte NUID from a 7-byte UID is in Annex 6 of this PDF.
In case the page goes down, a shameless rip-off from that page is given below.
Any and all mistakes you find in the code snippet below rightfully belong to the NXP guys, not me.
But how do you know whether the tag is a 4-byte or a 7-byte UID one?
From the ATQA response. Refer to page 15/36 of document 1 and page 8/15 of document 2.
In case the documents go down, here is the relevant excerpt from document 1.
The MF1S50yyX/V1 answers to a REQA or WUPA command with the ATQA value shown in Table 11, and to a Select CL1 command (CL2 for the 7-byte UID variant) with the SAK value shown in Table 12.
Remark: The ATQA coding in bits 7 and 8 indicates the UID size according to ISO/IEC 14443, independent of the settings of the UID usage.
6. Annex, Source code to derive NUID out of a Double Size UID
#include <stdio.h>
#include <conio.h>
#include <stdlib.h>
#include <time.h>

#define BYTE unsigned char

unsigned short UpdateCrc(unsigned char ch, unsigned short *lpwCrc)
{
    ch = (ch ^ (unsigned char)((*lpwCrc) & 0x00FF));
    ch = (ch ^ (ch << 4));
    *lpwCrc = (*lpwCrc >> 8) ^ ((unsigned short)ch << 8)
            ^ ((unsigned short)ch << 3) ^ ((unsigned short)ch >> 4);
    return (*lpwCrc);
}

void ComputeCrc(unsigned short wCrcPreset, unsigned char *Data, int Length,
                unsigned short &usCRC)
{
    unsigned char chBlock;
    do {
        chBlock = *Data++;
        UpdateCrc(chBlock, &wCrcPreset);
    } while (--Length);
    usCRC = wCrcPreset;
    return;
}

void Convert7ByteUIDTo4ByteNUID(unsigned char *uc7ByteUID, unsigned char *uc4ByteUID)
{
    unsigned short CRCPreset = 0x6363;
    unsigned short CRCCalculated = 0x0000;
    ComputeCrc(CRCPreset, uc7ByteUID, 3, CRCCalculated);
    uc4ByteUID[0] = (CRCCalculated >> 8) & 0xFF; // MSB
    uc4ByteUID[1] = CRCCalculated & 0xFF;        // LSB
    CRCPreset = CRCCalculated;
    ComputeCrc(CRCPreset, uc7ByteUID + 3, 4, CRCCalculated);
    uc4ByteUID[2] = (CRCCalculated >> 8) & 0xFF; // MSB
    uc4ByteUID[3] = CRCCalculated & 0xFF;        // LSB
    uc4ByteUID[0] = uc4ByteUID[0] | 0x0F;
    uc4ByteUID[0] = uc4ByteUID[0] & 0xEF;
}

int main(void)
{
    int i;
    unsigned char uc7ByteUID[7] = {0x04, 0x18, 0x3F, 0x09, 0x32, 0x1B, 0x85}; // 4F505D7D
    unsigned char uc4ByteUID[4] = {0x00};

    Convert7ByteUIDTo4ByteNUID(uc7ByteUID, uc4ByteUID);
    printf("7-byte UID = ");
    for (i = 0; i < 7; i++)
        printf("%02X", uc7ByteUID[i]);
    printf("\t4-byte FNUID = ");
    for (i = 0; i < 4; i++)
        printf("%02X", uc4ByteUID[i]);
    getch();
    return (0);
}
If you came here (as I did) looking for a proper way to automatically get the UID from a card regardless of whether it is a 4-, 7-, or 10-byte UID, I do it dynamically as follows (I found this logic somewhere on the internet but can't find it anymore to give proper credit; 10 bytes not tested).
This is C# code and uses winscard.dll under the hood:
public static UInt64 getCardUIDasUInt64() // *** only for Mifare 1K cards ***
{
    UInt64 UID = 0;
    byte[] receivedUID = new byte[10]; // ***
    Card.SCARD_IO_REQUEST request = new Card.SCARD_IO_REQUEST();
    request.dwProtocol = (UInt32)Protocol; // *** use the detected protocol instead of a statically assigned protocol type *** // Card.SCARD_PROTOCOL_T1;
    request.cbPciLength = (UInt32)System.Runtime.InteropServices.Marshal.SizeOf(typeof(Card.SCARD_IO_REQUEST));
    byte[] sendBytes = new byte[] { 0xFF, 0xCA, 0x00, 0x00, 0x00 }; // get-UID command for Mifare cards
    //byte[] sendBytes = new byte[] { 0xFF, 0xCA, 0x00, 0x00, 0x04 }; // get-UID command for Mifare cards
    int receivedBytesLength = receivedUID.Length;
    int status = Card.SCardTransmit(hCard, ref request, ref sendBytes[0], sendBytes.Length, ref request, ref receivedUID[0], ref receivedBytesLength);
    if (status == Card.SCARD_S_SUCCESS)
    {
        if (receivedBytesLength >= 2)
        {
            // do we have an error?
            if ((receivedUID[receivedBytesLength - 2] != 0x90) ||
                (receivedUID[receivedBytesLength - 1] != 0x00))
            {
                throw new Exception(receivedUID[receivedBytesLength - 2].ToString());
            }
            else if (receivedBytesLength > 2)
            {
                // strip the SW1/SW2 status bytes and fold the UID bytes into a UInt64
                for (UInt32 i = 0; i != receivedBytesLength - 2; i++)
                {
                    UID <<= 8;
                    UID |= (UInt64)(receivedUID[i]);
                }
            }
        }
        else
        {
            throw new Exception(ResourceHandling.getTextResource("Error_Card_Read"));
        }
    }
    return UID;
}
If you need the UID in hex, then use this (in addition to above code):
public static string getCardUIDasHex() // *** only for Mifare 1K cards ***
{
    UInt64 cardUID = getCardUIDasUInt64();
    return string.Format("{0:X}", cardUID);
}
Maybe this is also of help to someone else, as on the internet (also here on SO) there are many places that just read out the first four bytes of the UID, which is simply not correct anymore today.

C Program Strange Characters retrieved due to language setting on Windows

If the code below is compiled with UNICODE as a compiler option, the GetComputerNameEx API returns junk characters, whereas if it is compiled without the UNICODE option, the API returns a truncated value of the hostname.
This issue is mostly seen with Asia-Pacific languages such as Chinese, Japanese, and Korean (i.e., non-English).
Can anyone throw some light on how this issue can be resolved?
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define INFO_SIZE 30

int main()
{
    int ret;
    TCHAR infoBuf[INFO_SIZE + 1];
    DWORD bufSize = (INFO_SIZE + 1);
    char *buf;
    buf = (char *)malloc(INFO_SIZE + 1);
    if (!GetComputerNameEx((COMPUTER_NAME_FORMAT)1,
                           (LPTSTR)infoBuf, &bufSize))
    {
        printf("GetComputerNameEx failed (%d)\n", GetLastError());
        return -1;
    }
    ret = wcstombs(buf, infoBuf, (INFO_SIZE + 1));
    buf[INFO_SIZE] = '\0';
    return 0;
}
In the languages you mentioned, most characters are represented by more than one byte, because these languages have alphabets of far more than 256 characters. So you may need more than 30 bytes to encode 30 characters.
The usual pattern for calling a function like wcstombs goes like this: first get the number of bytes required, then allocate a buffer, then convert the string.
(Edit: passing NULL to query the length actually relies on a POSIX extension, which is also implemented on Windows.)
size_t size = wcstombs(NULL, infoBuf, 0); // query the required length
if (size == (size_t)-1) {
    // some character can't be converted
}
char *buf = malloc(size + 1);
size = wcstombs(buf, infoBuf, size + 1);
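Not part of the original answer, but on Windows specifically an alternative is to skip wcstombs and convert with an explicit code page via WideCharToMultiByte. A minimal sketch, assuming UTF-8 output is acceptable (the console may additionally need SetConsoleOutputCP(CP_UTF8) to display it correctly):
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    wchar_t name[256];
    DWORD size = 256;
    if (!GetComputerNameExW(ComputerNameDnsHostname, name, &size)) {
        printf("GetComputerNameExW failed (%lu)\n", GetLastError());
        return -1;
    }
    // First call: ask for the required buffer size in bytes.
    int bytes = WideCharToMultiByte(CP_UTF8, 0, name, -1, NULL, 0, NULL, NULL);
    char *utf8 = malloc(bytes);
    // Second call: perform the actual conversion.
    WideCharToMultiByte(CP_UTF8, 0, name, -1, utf8, bytes, NULL, NULL);
    printf("%s\n", utf8);
    free(utf8);
    return 0;
}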

libb64 C routines proper usage

I am having trouble with the C routines in libb64; here is my code:
base64_encodestate state;
int outBufLen = 2 * nInBuf;
*outBuf = new char[outBufLen];
base64_init_encodestate(&state);
int r1 = base64_encode_block(inBuf, nInBuf, *outBuf, &state);
int r2 = base64_encode_blockend(*outBuf, &state);
base64_init_encodestate(&state);
This puts the = at the beginning of the output, not at the end.
So I tried this:
base64_encodestate state;
int outBufLen = 2 * nInBuf;
*outBuf = new char[outBufLen];
base64_init_encodestate(&state);
int r1 = base64_encode_block(inBuf, nInBuf, *outBuf, &state);
int r2 = base64_encode_blockend(*outBuf + r1, &state);
base64_init_encodestate(&state);
This works, but not for "large" (~800 KB text) files; there it skips the final = entirely. In that case base64_encode_blockend(code_out, state) enters case step_C, where state->result = 0. I tried writing the base64 data to a file using the sizes reported by the libb64 functions, but it misses the end or is partial; I'm not sure.
I'm pretty much fed up with this. I based my code on the encode and decode structs.
Also, does anyone know of a Windows API for base64 encoding/decoding? I am not using any C++ standard library facilities, which is why I don't use the structs.
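On the Windows API question: yes, crypt32 provides base64 helpers. A minimal sketch using CryptBinaryToStringA for encoding (CryptStringToBinaryA is the decoding counterpart), following the usual two-call pattern of querying the length first; the program must link against crypt32.lib:
#include <windows.h>
#include <wincrypt.h>
#include <stdio.h>
#include <stdlib.h>
#pragma comment(lib, "crypt32.lib")

int main(void)
{
    const BYTE data[] = "hello world";
    DWORD len = 0;

    // First call: get the required output length (including the terminator).
    // CRYPT_STRING_NOCRLF (Vista and later) suppresses embedded line breaks.
    if (!CryptBinaryToStringA(data, sizeof(data) - 1,
                              CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF,
                              NULL, &len))
        return -1;

    char *b64 = malloc(len);

    // Second call: perform the encoding.
    if (!CryptBinaryToStringA(data, sizeof(data) - 1,
                              CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF,
                              b64, &len))
        return -1;

    printf("%s\n", b64); // prints aGVsbG8gd29ybGQ=
    free(b64);
    return 0;
}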
