Hi, I'm confused by the example code given here to check whether a machine is little or big endian:
Little Endian or Big Endian?
int isLittleEndian( void )
{
    unsigned int temp = 0x12345678;
    char *tempAddress = (char *)&temp;   // look at the first byte of temp
    if( *tempAddress == 0x12 )
    {
        return 0; // Indicating False
    } else {
        return 1; // Indicating True
    }
}
Versus this description of little and big endianness given here:
http://support.microsoft.com/kb/102025
The second link says that on an LE machine 0x1234 is stored in memory as 0x34 0x12; however, the isLittleEndian() function in the first link returns true if the first byte is 0x12. Isn't that a contradiction of the second link? If not, what is it I've misunderstood?
No, the function returns false if the first byte is 0x12, which is as it should be and jibes with the description of endianness.
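To make the byte ordering concrete, here is a small sketch (my own illustration, not taken from either link) that dumps the bytes of 0x12345678 in memory order. On a little-endian machine it prints 78 56 34 12, so *tempAddress is 0x78, the comparison with 0x12 fails, and isLittleEndian() returns 1; on a big-endian machine it prints 12 34 56 78 and the function returns 0.
#include <cstdio>

int main() {
    unsigned int temp = 0x12345678;
    const unsigned char *p = reinterpret_cast<const unsigned char *>(&temp);
    // Little-endian machines print: 78 56 34 12
    // Big-endian machines print:    12 34 56 78
    for (unsigned i = 0; i < sizeof temp; ++i)
        std::printf("%02X ", p[i]);
    std::printf("\n");
    return 0;
}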
I have a card reader that always reports 64 bits and can read cards with 4- or 7-byte UIDs.
As an example, I see it can report:
04-18-c5-82-00-00-00-00 - a 4-byte UID in the form uid0-uid1-uid2-uid3-00-00-00-00
04-18-c5-82-f1-3b-81-00 - a 7-byte UID in the form uid0-uid1-uid2-uid3-uid4-uid5-uid6-00
What prevents a 7-byte UID from having uid4, uid5 and uid6 set to zero? Is this covered in a spec? If so, which spec?
What prevents a 7-byte UID from having uid4, uid5 and uid6 set to zero?
Nothing. The format of the UID (as used by MIFARE cards) is defined in ISO/IEC 14443-3. Specifically for MIFARE cards, NXP has (or at least had?) some further allocation logic for 4 byte UIDs, but that's not publicly available.
Is it possible to distinguish the two cases?
If the reader outputs the UIDs exactly in the form that you showed in your example, then the answer is no (at least not reliably). However, some readers output the UID on 8 bytes and include the cascade tag for 7-byte-UIDs. Thus, all 7-byte-UIDs start with 0x88 for those readers. This does not seem to be the case with your reader.
Are there possible strategies to distinguish the two cases?
Some strategies come to my mind to distinguish 4-byte-UIDs from 7-byte-UIDs.
The first byte of a 7-byte UID is the manufacturer code as defined in ISO/IEC 7816-6 (see How to detect manufacturer from NFC tag using Android? on how to obtain the list). Thus, if you have a limited set of manufacturers (e.g. if you only use MIFARE cards with chips from NXP), you could interpret all UIDs that start with NXP's manufacturer code (0x04) as 7-byte UIDs. Nevertheless, you should be aware that 4-byte UIDs are allowed to start with 0x04 as well, so this method is not 100% reliable and may fail in some cases.
The first byte of a 4-byte UID must not be any of the following values: 'x8' (with x != '0') or 'xF'. If you find the first byte to match one of those values, you can assume the UID to consist of 7 bytes. A small sketch combining these two heuristics follows below.
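Purely as an illustration of the two heuristics above (not a reliable detector, for the reasons already given), a sketch could look like this; the function name and the hard-coded 0x04 NXP manufacturer code are my own assumptions:
#include <cstdint>

// Heuristic guess at the UID length, looking only at the first UID byte.
// NOT reliable: 4-byte UIDs are allowed to start with 0x04 as well.
int guessUidLength(const uint8_t *uid)
{
    const uint8_t uid0 = uid[0];
    const uint8_t lowNibble = uid0 & 0x0F;
    const uint8_t highNibble = uid0 >> 4;

    // Values a 4-byte UID must not start with: 'x8' (with x != '0') and 'xF'.
    if ((lowNibble == 0x08 && highNibble != 0x00) || lowNibble == 0x0F)
        return 7;

    // NXP's manufacturer code: treat as a 7-byte UID if you only use NXP chips.
    if (uid0 == 0x04)
        return 7;

    return 4; // otherwise assume a 4-byte UID
}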
If you can get the ATQA response, you can distinguish them: the lower byte of the ATQA tells you how long the UID is (4/7/10 bytes). As far as I know there is no other way to distinguish them with 100% assurance.
I know it's a bit late to the party, but for anyone who has the same doubt:
the documented way of deriving a 4-byte NUID from a 7-byte UID is in Annex 6 of this PDF.
In case the page goes down, a shameless rip-off from that page is given below.
Any and all mistakes you may find in the code snippet below rightfully belong to the NXP guys, not me.
But how do you know whether the tag has a 4-byte or a 7-byte UID?
From the ATQA response. Refer to page 15/36 of document 1 and page 8/15 of document 2.
In case the documents go down, here is the relevant excerpt from document 1.
The MF1S50yyX/V1 answers to a REQA or WUPA command with the ATQA value
shown in Table 11 and to a Select CL1 command (CL2 for the 7-byte UID variant) with the SAK value shown in Table 12.
Remark: The ATQA coding in bits 7 and 8 indicate the UID size according to ISO/IEC 14443 independent from the settings of the UID usage.
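To illustrate that remark, a small sketch that decodes the UID size from the low ATQA byte might look like the following (bit numbering as in ISO/IEC 14443-3, where bits 7 and 8 of the first ATQA byte form the UID size bit frame; the function name is mine):
#include <cstdint>

// Decode the "UID size bit frame": bits 7 and 8 of the low (first) ATQA byte.
// 00b -> single size (4 bytes), 01b -> double size (7 bytes), 10b -> triple size (10 bytes).
int uidSizeFromAtqa(uint8_t atqaLowByte)
{
    switch ((atqaLowByte >> 6) & 0x03) {
        case 0x00: return 4;
        case 0x01: return 7;
        case 0x02: return 10;
        default:   return -1; // 11b is reserved for future use
    }
}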
6. Annex, Source code to derive NUID out of a Double Size UID
#include <stdio.h>
#include <conio.h>
#include <stdlib.h>
#include <time.h>

#define BYTE unsigned char

unsigned short UpdateCrc(unsigned char ch, unsigned short *lpwCrc)
{
    ch = (ch ^ (unsigned char)((*lpwCrc) & 0x00FF));
    ch = (ch ^ (ch << 4));
    *lpwCrc = (*lpwCrc >> 8) ^ ((unsigned short)ch << 8) ^ ((unsigned short)ch << 3) ^ ((unsigned short)ch >> 4);
    return (*lpwCrc);
}

void ComputeCrc(unsigned short wCrcPreset, unsigned char *Data, int Length, unsigned short &usCRC)
{
    unsigned char chBlock;
    do {
        chBlock = *Data++;
        UpdateCrc(chBlock, &wCrcPreset);
    } while (--Length);
    usCRC = wCrcPreset;
    return;
}

void Convert7ByteUIDTo4ByteNUID(unsigned char *uc7ByteUID, unsigned char *uc4ByteUID)
{
    unsigned short CRCPreset = 0x6363;
    unsigned short CRCCalculated = 0x0000;
    ComputeCrc(CRCPreset, uc7ByteUID, 3, CRCCalculated);
    uc4ByteUID[0] = (CRCCalculated >> 8) & 0xFF; // MSB
    uc4ByteUID[1] = CRCCalculated & 0xFF;        // LSB
    CRCPreset = CRCCalculated;
    ComputeCrc(CRCPreset, uc7ByteUID + 3, 4, CRCCalculated);
    uc4ByteUID[2] = (CRCCalculated >> 8) & 0xFF; // MSB
    uc4ByteUID[3] = CRCCalculated & 0xFF;        // LSB
    uc4ByteUID[0] = uc4ByteUID[0] | 0x0F;
    uc4ByteUID[0] = uc4ByteUID[0] & 0xEF;
}

int main(void)
{
    int i;
    unsigned char uc7ByteUID[7] = {0x04, 0x18, 0x3F, 0x09, 0x32, 0x1B, 0x85}; // 4F505D7D
    unsigned char uc4ByteUID[4] = {0x00};
    Convert7ByteUIDTo4ByteNUID(uc7ByteUID, uc4ByteUID);
    printf("7-byte UID = ");
    for (i = 0; i < 7; i++)
        printf("%02X", uc7ByteUID[i]);
    printf("\t4-byte FNUID = ");
    for (i = 0; i < 4; i++)
        printf("%02X", uc4ByteUID[i]);
    getch();
    return (0);
}
If you came here (as I did) to find a proper way to automatically get the UID from a card, independent of whether it is a 4-, 7- or 10-byte UID, I do it dynamically as follows (I found this logic somewhere on the internet but can't find it anymore to give proper credit; the 10-byte case is not tested).
This is C# code and uses winscard.dll under the hood:
public static UInt64 getCardUIDasUInt64() // *** only for mifare 1k cards ***
{
    UInt64 UID = 0;
    byte[] receivedUID = new byte[10]; // ***
    Card.SCARD_IO_REQUEST request = new Card.SCARD_IO_REQUEST();
    request.dwProtocol = (UInt32)Protocol; // *** use the detected protocol instead of statically assigned protocol type *** // Card.SCARD_PROTOCOL_T1;
    request.cbPciLength = (UInt32)System.Runtime.InteropServices.Marshal.SizeOf(typeof(Card.SCARD_IO_REQUEST));

    byte[] sendBytes = new byte[] { 0xFF, 0xCA, 0x00, 0x00, 0x00 }; // get UID command for Mifare cards
    //byte[] sendBytes = new byte[] { 0xFF, 0xCA, 0x00, 0x00, 0x04 }; // get UID command for Mifare cards
    int receivedBytesLength = receivedUID.Length;

    int status = Card.SCardTransmit(hCard, ref request, ref sendBytes[0], sendBytes.Length, ref request, ref receivedUID[0], ref receivedBytesLength);
    if (status == Card.SCARD_S_SUCCESS)
    {
        if (receivedBytesLength >= 2)
        {
            // do we have an error?
            if ((receivedUID[receivedBytesLength - 2] != 0x90) ||
                (receivedUID[receivedBytesLength - 1] != 0x00))
            {
                throw new Exception(receivedUID[receivedBytesLength - 2].ToString());
            }
            else if (receivedBytesLength > 2)
            {
                for (UInt32 i = 0; i != receivedBytesLength - 2; i++)
                {
                    UID <<= 8;
                    UID |= (UInt64)(receivedUID[i]);
                }
            }
        }
        else
        {
            throw new Exception(ResourceHandling.getTextResource("Error_Card_Read"));
        }
    }
    return UID;
}
If you need the UID in hex, then use this (in addition to the above code):
public static string getCardUIDasHex() // *** only for mifare 1k cards ***
{
    UInt64 cardUID = getCardUIDasUInt64();
    return string.Format("{0:X}", cardUID);
}
Maybe this is also of help to someone else, as on the internet (also here on SO) there are many places that just read out the first four bytes of the UID, which is simply not correct anymore today.
If the code below is compiled with UNICODE as a compiler option, the GetComputerNameEx API returns junk characters.
Whereas if it is compiled without the UNICODE option, the API returns a truncated value of the hostname.
This issue is mostly seen with Asia-Pacific languages like Chinese, Japanese and Korean, to name a few (i.e., non-English).
Can anyone throw some light on how this issue can be resolved?
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define INFO_SIZE 30

int main()
{
    int ret;
    TCHAR infoBuf[INFO_SIZE+1];
    DWORD bufSize = (INFO_SIZE+1);
    char *buf;
    buf = (char *) malloc(INFO_SIZE+1);

    if (!GetComputerNameEx((COMPUTER_NAME_FORMAT)1,
                           (LPTSTR)infoBuf, &bufSize))
    {
        printf("GetComputerNameEx failed (%d)\n", GetLastError());
        return -1;
    }

    ret = wcstombs(buf, infoBuf, (INFO_SIZE+1));
    buf[INFO_SIZE] = '\0';
    return 0;
}
In the languages you mentioned, most characters are represented by more than one byte, because these languages have alphabets of far more than 256 characters. So you may need more than 30 bytes to encode 30 characters.
The usual pattern for calling a function like wcstombs goes like this: first get the number of bytes required, then allocate a buffer, then convert the string.
(edit: that actually relies on a POSIX extension, which is also implemented on Windows)
size_t size = wcstombs(NULL, infoBuf, 0);
if (size == (size_t) -1) {
    // some character can't be converted
}
char *buf = new char[size + 1];
size = wcstombs(buf, infoBuf, size + 1);
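For reference, a minimal sketch of the whole program rewritten around this pattern (my own rewrite of the question's code, assuming a UNICODE build; setlocale is needed so wcstombs uses the system's multibyte encoding, and the 256-character buffer size is an arbitrary choice):
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <locale.h>

int main()
{
    WCHAR infoBuf[256];
    DWORD bufSize = sizeof(infoBuf) / sizeof(infoBuf[0]);

    setlocale(LC_ALL, "");   // use the system's default multibyte encoding

    if (!GetComputerNameExW(ComputerNameDnsHostname, infoBuf, &bufSize))
    {
        printf("GetComputerNameEx failed (%lu)\n", GetLastError());
        return -1;
    }

    // First ask how many bytes the converted string needs...
    size_t size = wcstombs(NULL, infoBuf, 0);
    if (size == (size_t)-1)
    {
        printf("hostname contains characters the current locale cannot represent\n");
        return -1;
    }

    // ...then allocate exactly that much and convert.
    char *buf = (char *)malloc(size + 1);
    wcstombs(buf, infoBuf, size + 1);
    printf("%s\n", buf);
    free(buf);
    return 0;
}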
I have a problem regarding extracting a signed int from a string in C++.
Assuming that I have a string like "Images1234", how can I extract the 1234 from the string without knowing the position of the last non-numeric character, in C++?
FYI, I have tried stringstream as well as lexical_cast, as suggested in other posts, but stringstream returns 0 while lexical_cast stops working.
#include <iostream>
#include <string>
#include <sstream>
#include <boost/lexical_cast.hpp>

using namespace std;

int main()
{
    string virtuallive("Images1234");
    //stringstream output(virtuallive.c_str());
    //int i = stoi(virtuallive);
    //stringstream output(virtuallive);
    int i;
    i = boost::lexical_cast<int>(virtuallive.c_str());
    //output >> i;
    cout << i << endl;
    return 0;
}
How can I extract the 1234 from the string without knowing the position of the last non-numeric character in C++?
You can't. But the position is not hard to find:
auto last_non_numeric = input.find_last_not_of("1234567890");
char* endp = &input[0];
if (last_non_numeric != std::string::npos)
    endp += last_non_numeric + 1;
if (!*endp) { /* FAILURE, no number on the end */ }
auto i = strtol(endp, &endp, 10);
if (*endp) { /* weird FAILURE, maybe the number was really HUGE and couldn't convert */ }
Another possibility would be to put the string into a stringstream, then read the number from the stream (after imbuing the stream with a locale that classifies everything except digits as white space).
// First the desired facet:
#include <locale>
#include <vector>
#include <algorithm>

struct digits_only: std::ctype<char> {
    digits_only(): std::ctype<char>(get_table()) {}
    static std::ctype_base::mask const* get_table() {
        // everything is white-space:
        static std::vector<std::ctype_base::mask>
            rc(std::ctype<char>::table_size, std::ctype_base::space);
        // except digits, which are digits ('9' + 1 so that '9' itself is included)
        std::fill(&rc['0'], &rc['9'] + 1, std::ctype_base::digit);
        // and '.', which we'll call punctuation:
        rc['.'] = std::ctype_base::punct;
        return &rc[0];
    }
};
Then the code to read the data:
std::istringstream virtuallive("Images1234");
virtuallive.imbue(std::locale(std::locale(), new digits_only));

int number;
// Since we classify the letters as white space, the stream will ignore them.
// We can just read the number as if nothing else were there:
virtuallive >> number;
This technique is useful primarily when the stream contains a substantial amount of data, and you want all the data in that stream to be interpreted in the same way (e.g., only read numbers, regardless of what else it might contain).
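Putting the pieces together, a self-contained version of the facet approach (for illustration only; it simply repeats the snippets above with the necessary includes and a main) would be:
#include <algorithm>
#include <iostream>
#include <locale>
#include <sstream>
#include <vector>

struct digits_only: std::ctype<char> {
    digits_only(): std::ctype<char>(get_table()) {}
    static std::ctype_base::mask const* get_table() {
        static std::vector<std::ctype_base::mask>
            rc(std::ctype<char>::table_size, std::ctype_base::space);
        std::fill(&rc['0'], &rc['9'] + 1, std::ctype_base::digit);
        rc['.'] = std::ctype_base::punct;
        return &rc[0];
    }
};

int main() {
    std::istringstream virtuallive("Images1234");
    virtuallive.imbue(std::locale(std::locale(), new digits_only));

    int number = 0;
    virtuallive >> number;
    std::cout << number << '\n';   // prints 1234
    return 0;
}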
I found the code below on the internet; it is supposed to count sentences on an 8051 MCU.
Can someone please explain to me what exactly is happening on the lines marked with question marks?
Any kind of help would be highly appreciated.
#include <string.h>

char code *text =" what is a program? that has, a a lot of errors! When " ;
char code *text1=" you compile. this file, uVision. reports a number of? ";
char code *text2=" problems that you! may interactively correct. " ; // Null characters are also included in the arrays!!!

void count(char pdata*, char pdata*);

void main (void){
    char pdata Nw, Ns;
    char data TextNw[2], TextNs[2];

    count(&Nw, &Ns);      // call subroutine
    TextNw[0] = Nw/10;    //?????????????????????????????????
    TextNw[1] = Nw%10;    //?????????????????????????????????
    TextNs[0] = Ns/10;    //?????????????????????????????????
    TextNs[1] = Ns%10;    //?????????????????????????????????
    while(1);
}

void count(char pdata *Nw, char pdata *Ns){
    unsigned char N, i, ch;
    typedef enum {idle1, idle2} state;  //?????????????????????????????????
    state S;                            // beginning state

    P2 = 0x00;       // pdata bank definition, it must be performed first!!
    *Ns = *Nw = 0;   // without proper start-up there is no initialisation, initialise now!!
    S = idle1;       // beginning state
    N = strlen(text)+strlen(text1)+strlen(text2)+3;  //????????????? + 3 to account for the 3 Null characters!
    P2 = 0x00;       // pdata bank definition

    for(i = 0; i != N; i++){
        ch = text[i];   // take a character from the text
        switch (S)
        {
        case (idle1):{
            if (ch == 0) break;   // skip NULL terminating character!
            if (ch != ' '){
                S = idle2;
                (*Nw)++;
            }
            break;
        }
        case (idle2):{
            if (ch == 0) break;   // skip NULL terminating character!
            if ((ch == ' ')||(ch == ',')) S = idle1;
            else if ((ch == '?')||(ch == '.')||(ch == '!')){
                S = idle1;
                (*Ns)++;
            }
            break;
        }
        }
    }
}
This program does two things in conjunction: it counts the number of sentences in the text and the number of words in the text. Once the counting is done, the results are stored in two-char arrays as separate decimal digits. For example, for 57 words in 3 sentences the results will be stored as TextNw = {5, 7} and TextNs = {0, 3}.
The variable N contains the full length of the text, with the addition of 3 null terminating characters (one per string).
The algorithm counts words and sentences simultaneously. In the idle1 state the scanner is between words, waiting for the first character of the next word (at which point the word counter is increased). In the idle2 state it is inside a word, waiting for a delimiter: a space or comma ends the word, while '?', '.' or '!' ends both the word and the sentence, in which case the sentence counter is increased.
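For readers without the Keil C51 toolchain, here is a plain host-side sketch of the same two-state counter (my own rewrite; the code/pdata/data memory qualifiers and the P2 bank setup are 8051-specific details that are omitted here):
#include <cstdio>
#include <cstring>

// Count words and sentences with the same two-state machine as the 8051 code.
static void count(const char *text, int *Nw, int *Ns)
{
    enum { idle1, idle2 } S = idle1;   // idle1 = between words, idle2 = inside a word
    *Nw = *Ns = 0;

    const std::size_t N = std::strlen(text) + 1;   // include the NUL, like the original
    for (std::size_t i = 0; i != N; i++) {
        const char ch = text[i];
        switch (S) {
        case idle1:
            if (ch != 0 && ch != ' ') {            // first character of a new word
                S = idle2;
                (*Nw)++;
            }
            break;
        case idle2:
            if (ch == ' ' || ch == ',') {
                S = idle1;                         // word ended
            } else if (ch == '?' || ch == '.' || ch == '!') {
                S = idle1;                         // word and sentence ended
                (*Ns)++;
            }
            break;
        }
    }
}

int main()
{
    int Nw, Ns;
    count(" what is a program? that has, a a lot of errors! ", &Nw, &Ns);
    std::printf("words=%d sentences=%d\n", Nw, Ns);
    return 0;
}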
I am using a toolkit to do some Elliptic Curve Cryptography on an ATmega2560. When trying to use the print functions in the toolkit I am getting an empty string. I know the print functions work because the x86 version prints the variables without a problem. I am not experienced with ATmega and would love any help on this matter. The print code is included below.
Code to print a big number (it calls util_print internally):
void bn_print(bn_t a) {
    int i;

    if (a->sign == BN_NEG) {
        util_print("-");
    }
    if (a->used == 0) {
        util_print("0\n");
    } else {
#if WORD == 64
        util_print("%lX", (unsigned long int)a->dp[a->used - 1]);
        for (i = a->used - 2; i >= 0; i--) {
            util_print("%.*lX", (int)(2 * (BN_DIGIT / 8)),
                    (unsigned long int)a->dp[i]);
        }
#else
        util_print("%llX", (unsigned long long int)a->dp[a->used - 1]);
        for (i = a->used - 2; i >= 0; i--) {
            util_print("%.*llX", (int)(2 * (BN_DIGIT / 8)),
                    (unsigned long long int)a->dp[i]);
        }
#endif
        util_print("\n");
    }
}
The code that actually does the printing (util_printf):
static char buffer[64 + 1];

void util_printf(char *format, ...) {
#ifndef QUIET
#if ARCH == AVR
    char *pointer = &buffer[1];
    va_list list;
    va_start(list, format);
    vsnprintf(pointer, 128, format, list);
    buffer[0] = (unsigned char)2;
    va_end(list);
#elif ARCH == MSP
    va_list list;
    va_start(list, format);
    vprintf(format, list);
    va_end(list);
#else
    va_list list;
    va_start(list, format);
    vprintf(format, list);
    fflush(stdout);
    va_end(list);
#endif
#endif
}
edit: I do have the UART initialized and can output printf statements to a console.
I'm one of the authors of the RELIC toolkit. The current util_printf() function is used to print inside the Avrora simulator, for debugging purposes. I'm glad that you could adapt the code to your purposes. As a side note, the buffer size problem was already fixed in more recent releases of the toolkit.
Let me know if you have further problems with the library. You can either contact me personally or write directly to the discussion group.
Thank you!
vsnprintf stores its output in the given buffer (which in this case is the address pointed to by the pointer variable). In order for the output to show on the console (through the UART) you must send your buffer using printf (try adding printf("%s", pointer) after vsnprintf). If you're using avr-libc, don't forget to initialize the standard streams first before making any call to a printf function.
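A sketch of what that stream initialization could look like with avr-libc's fdevopen() follows; the uart_init/uart_putchar routines are placeholders for your own UART code, and the register names assume USART0 on an ATmega2560:
#include <stdio.h>
#include <avr/io.h>

// Placeholder: transmit one character over the UART (ATmega2560 USART0 assumed).
static int uart_putchar(char c, FILE *stream)
{
    while (!(UCSR0A & (1 << UDRE0)))
        ;            // wait until the transmit data register is empty
    UDR0 = c;
    return 0;
}

int main(void)
{
    // uart_init();  // placeholder: set the baud rate and enable the transmitter first

    // Route stdout through the UART so printf()/vprintf() output becomes visible.
    stdout = fdevopen(uart_putchar, NULL);

    printf("hello from the AVR\n");
    for (;;) { }
}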
Oh, and by the way, your code is vulnerable to a buffer overflow: buffer[64 + 1] means your buffer size is only 65 bytes, while vsnprintf(pointer, 128, format, list) tells vsnprintf that the buffer holds 128 bytes. Change that size to below 65 bytes in order to avoid the overflow.
Alright, so I found a workaround to print the bn numbers to stdout on an ATmega2560. The toolkit comes with a function that writes a variable to a string (bn_write_str), so I implemented my own print function as follows:
void print_bn(bn_t a)
{
    char print[BN_SIZE];              // max precision of a bn number
    int bi = bn_bits(a);              // get the number of bits of the number
    bn_write_str(print, bi, a, 16);   // 16 indicates the radix (hexadecimal)
    printf("%s\n", print);
}
This function will print a bn number in hexadecimal format.
Hope this helps anyone using the RELIC toolkit with an AVR.
This skips the util_print calls.