Our team develops a POS solution for NFC cards on Ingenico devices.
What we use to read the card:
/* Open the MIFARE driver */
int ClessMifare_OpenDriver (void);
Return value: OK
/* Wait until a MIFARE contactless card is detected */
int ClessMifare_DetectCardsEx (unsigned char nKindOfCard, unsigned int *pNumOfCards, unsigned int nTimeout);
Return value: OK
/*Retrieve the type of the MIFARE card and its UID */
int ClessMifare_GetUid (unsigned char nCardIndex, unsigned char *pKindOfCard, unsigned char *pUidLength, unsigned char *pUid);
Return value:
Parameter 2: pKindOfCard (type of card)
Card1: CL_B_UNDEFINED
Card2: CL_B_UNDEFINED
Card3: CL_B_UNDEFINED
Card4: CL_MF_CLASSIC
Parameter 4: pUid (UID of the card)
Card1: "\004Br\302\3278\200"
Card2: "\004\333\354y\342\002\200"
Card3: "\004s\247B\344?\201"
Card4: "\016\310d\301"
But in real life we expect:
Card1 044272c2d73880
Card2 04dbec79e20280
Card3 0473a742e43f81
Card4 0ec864c1
From Android NFC readers we get the expected numbers, but the output from the Ingenico POS is quite different. What do we need to do to get these numbers in hex?
Thanks!
You are actually seeing the right UIDs here; there is just a representation issue you were not expecting. The return values you are quoting are C strings with octal escaping for non-printable characters: \nnn is the octal representation of a byte.
In the value "\004s\247B\344?\201" you have \004, a byte of value 0x04, followed by the printable character s, of value 0x73, followed by \247, of value 0xa7, and so on.
You can convert to hex for debugging with Python, for example:
$ python2
>>> import binascii
>>> binascii.b2a_hex("\004Br\302\3278\200")
'044272c2d73880'
>>> binascii.b2a_hex("\004\333\354y\342\002\200")
'04dbec79e20280'
>>> binascii.b2a_hex("\004s\247B\344?\201")
'0473a742e43f81'
>>> binascii.b2a_hex("\016\310d\301")
'0ec864c1'
But overall, the data is all there.
Related
For example, by convention I zero out a buffer (setting every element to zero, which also null-terminates it) the following way, example 1:
char buffer[1024] = {0};
And with the windows.h header we can call ZeroMemory, example 2:
char buffer[1024];
ZeroMemory(buffer, sizeof(buffer));
According to the documentation provided by Microsoft, ZeroMemory fills a block of memory with zeros. I want to be accurate in my Windows application, so I thought: what better place to ask than Stack Overflow?
Are these two examples equivalent in logic?
Yes, the two examples are equivalent: the entire array is filled with zeros in both cases.
In the case of char buffer[1024] = {0};, you are explicitly setting only the first char element to 0, and the compiler then implicitly value-initializes the remaining 1023 char elements to 0 for you.
In C++11 and later, you can omit that first element value:
char buffer[1024] = {};
char buffer[1024]{};
I am in a 4-day fight with this code:
unsigned long baudrate = 0;
unsigned char databits = 0;
unsigned char stop_bits = 0;
char parity_text[10];
char flowctrl_text[4];
const char xformat[] = "%lu,%hhu,%hhu,%[^,],%[^,]\n";
const char xtext[] = "115200,8,1,EVEN,NFC\n";
int res = sscanf(xtext, xformat, &baudrate, &databits, &stop_bits, (char*) &parity_text, (char*) &flowctrl_text);
printf("Res: %d\r\n", res);
printf("baudrate: %lu, databits: %hhu, stop: %hhu, \r\n", baudrate, databits, stop_bits);
printf("parity: %s \r\n", parity_text);
printf("flowctrl: %s \r\n", flowctrl_text);
It returns:
Res: 5
baudrate: 115200, databits: 0, stop: 1,
parity:
flowctrl: NFC
Databits and parity are missing!
Actually, the memory under the parity variable is '\0'VEN'\0'; it looks like the first character was somehow overwritten by the sscanf call.
The return value of sscanf is 5, which suggests that it was able to parse the input.
My configuration:
gccarmnoneeabi 7.2.1
Visual Studio Code 1.43.2
PlatformIO Core 4.3.1
PlatformIO Home 3.1.1
Lib ST-STM 6.0.0 (Mbed 5.14.1)
STM32F446RE (Nucleo-F446RE)
I have tried (without success):
compiling with mbed RTOS and without
variable types uint8_t, uint32_t
gccarm versions: 6.3.1, 8.3.1, 9.2.1
using another IDE (CLion+PlatformIO)
compiling on another computer (same config)
What actually helps:
making the variables static
compiling in Mbed online compiler
The behavior of sscanf seems very unpredictable as a whole; mixing the order or data types of the variables sometimes helps, but most often just produces different flaws in the output.
This took me longer than I care to admit. But like most issues it ended up being very simple.
char parity_text[10];
char flowctrl_text[4];
Needs to be changed to:
char parity_text[10] = {0};
char flowctrl_text[5] = {0};
The flowctrl_text array is not large enough at size four: the final %[^,] conversion does not stop at the newline, so it stores "NFC\n" plus the null terminator, which takes 5 bytes. The one-byte overflow lands in adjacent stack memory, which on your build is evidently the start of parity_text, explaining its overwritten first character. If you bump flowctrl_text to a size of 5 you should have no problem. Just to be safe I would also initialize the arrays to 0.
Once I increased the size, I had no issues with your existing code. Let me know if this helps.
I have not been able to find a reliable solution to my problem. What I'm simply trying to do is create a function which:
takes a row and column position in the terminal,
calls mvwinch(window_object, rows, cols), which returns an unsigned int corresponding to the character in the terminal at that position,
returns the ASCII character associated with that unsigned int, effectively casting it back to a char.
Here is an example of my code in C++11:
char Kmenu::getChrfromW(size_t const y, size_t const x,
bool const save_cursor) const {
size_t curr_y, curr_x;
getyx(_win, curr_y, curr_x);
char ich = mvwinch(_win, y, x);
char ch = ich;
if (save_cursor)
wmove(_win, curr_y, curr_x);
return ch;
}
If, for example, the character in the terminal at position (2,3) is the letter 'a', I want this function to return the letter 'a'.
I tried the solution described here:
Convert ASCII number to ASCII Character in C
which effectively casts an integer as char.
Unfortunately, what I get back is still the integer: testing with a screen filled with 'w's, I get back the integer 119.
The man page for the curses function mvwinch() describes it as returning chtype, which the compiler sees as unsigned int.
Is there a built-in curses function which gives the char back directly, without the cast from unsigned int, or some other way I can achieve this?
Edit: ch to ich, as in the actual code
A chtype contains a character along with other data. The curses.h header has several symbols which are useful for extracting those bits. If you mask it with A_CHARTEXT and cast that to a char, you will get a character:
char c = (char)((A_CHARTEXT) & n);
Your example should not compile, since it declares ch twice. You may have meant this:
char Kmenu::getChrfromW(size_t const y, size_t const x,
bool const save_cursor) const {
int curr_y, curr_x; // size_t is inappropriate...
getyx(_win, curr_y, curr_x);
char ch = (char)((A_CHARTEXT) & mvwinch(_win, y, x));
// char ch = ich;
if (save_cursor)
wmove(_win, curr_y, curr_x);
return ch;
}
The manual page for mvwinch mentions the A_CHARTEXT mask in the Attributes section, assuming the reader is familiar with things like that:
The following bit-masks may be AND-ed with characters returned by
winch.
A_CHARTEXT Bit-mask to extract character
A_ATTRIBUTES Bit-mask to extract attributes
A_COLOR Bit-mask to extract color-pair field information
I'm trying to format some UTF-8 encoded strings in C code (char *) using the printf function. I need to specify a length in the format. Everything goes well when there are no multi-byte characters in the parameter string, but the result seems to be incorrect when the data contains multi-byte chars.
My glibc is kind of old (2.17), so I tried with some online compilers, and the result is the same.
#include <stdio.h>
#include <stdlib.h>
#include <locale.h>
int main(void)
{
setlocale( LC_CTYPE, "en_US.UTF-8" );
setlocale( LC_COLLATE, "en_US.UTF-8" );
printf( "'%-4.4s'\n", "elephant" );
printf( "'%-4.4s'\n", "éléphant" );
printf( "'%-20.20s'\n", "éléphant" );
return 0;
}
Result of execution is :
'elep'
'él�'
'éléphant '
The first line is correct (4 chars in the output).
The second line is obviously wrong (at least from a human point of view).
The last line is also wrong: only 18 Unicode chars are written instead of 20.
It seems that the printf function counts chars before UTF-8 decoding (counting bytes instead of Unicode chars).
Is that a bug in glibc or a well-documented limitation of printf?
It's true that printf counts bytes, not multibyte characters. If it's a bug, the bug is in the C standard, not in glibc (the standard library implementation usually used in conjunction with gcc).
In fairness, counting characters wouldn't help you align unicode output either, because unicode characters are not all the same display width even with fixed-width fonts. (Many codepoints are width 0, for example.)
I'm not going to attempt to argue that this behaviour is "well-documented". Standard C's locale facilities have never been particularly adequate to the task, imho, and they have never been particularly well documented, in part because the underlying model attempts to encompass so many possible encodings without ever grounding itself in a concrete example that it is almost impossible to explain. (...Long rant deleted...)
You can use the wchar.h formatted output functions, which count in wide characters. (That still isn't going to give you correct output alignment, but it will count precision the way you expect.)
Let me quote rici: It's true that printf counts bytes, not multibyte characters. If it's a bug, the bug is in the C standard, not in glibc (the standard library implementation usually used in conjunction with gcc).
However, don't conflate wchar_t and UTF-8. See wikipedia to grasp the sense of the former. UTF-8, instead, can be dealt with almost as if it were good old ASCII. Just avoid truncating in the middle of a character.
In order to get alignment, you want to count characters, then pass the byte count to printf. That can be achieved by using the * precision and passing the count of bytes. For example, since an accented e takes two bytes:
printf("'%-4.*s'\n", 6, "éléphant");
A function to count bytes is easily coded based on the format of UTF-8 characters:
static int count_bytes(char const *utf8_string, int length)
{
    char const *s = utf8_string;
    while (length-- > 0)
    {
        int ch = *(unsigned char *)s;
        if (ch == 0)                /* stop before the terminator */
            break;
        ++s;
        if ((ch & 0xc0) == 0xc0)    /* first byte of a multi-byte UTF-8 sequence */
            while ((*(unsigned char *)s & 0xc0) == 0x80)
                ++s;                /* skip its continuation bytes */
    }
    return (int)(s - utf8_string);
}
At this point, however, one would end up with lines like:
printf("'%-4.*s'\n", count_bytes("éléphant", 4), "éléphant");
Having to repeat the string twice quickly becomes a maintenance nightmare. At a minimum, one can define a macro to make sure the string is the same. Assuming the above function is saved in some utf8-util.h file, your program could be rewritten as follows:
#include <stdio.h>
#include <stdlib.h>
#include <locale.h>
#include "utf8-util.h"
#define INT_STR_PAIR(i, s) count_bytes(s, i), s
int main(void)
{
setlocale( LC_CTYPE, "en_US.UTF-8" );
setlocale( LC_COLLATE, "en_US.UTF-8" );
printf( "'%-4.*s'\n", INT_STR_PAIR(4, "elephant"));
printf( "'%-4.*s'\n", INT_STR_PAIR(4, "éléphant"));
printf( "'%-4.*s'\n", INT_STR_PAIR(4, "é𐅫éphant"));
printf( "'%-20.*s'\n", INT_STR_PAIR(20, "éléphant"));
return 0;
}
The last but one test uses 𐅫, the Greek acrophonic thespian three hundred (U+1016B) character. Given how the counting works, testing with consecutive non-ASCII characters makes sense. The ancient Greek character looks "wide" enough to see how much space it takes using a fixed-width font. The output may look like:
'elep'
'élép'
'é𐅫ép'
'éléphant '
(On my terminal, those 4-char strings are of equal length.)
I'm running a number of SNMP queries against a Hytera DMR repeater. However, the SNMP object definition looks like this:
rptVswr OBJECT-TYPE
SYNTAX OCTET STRING(SIZE(4))
MAX-ACCESS read-only
STATUS mandatory
DESCRIPTION
"The VSWR.
It should be changed to float format. "
-- 1.3.6.1.4.1.40297.1.2.1.2.4
::= { rptDataInfo 4 }
After running the query, I got a result like this:
Name/OID: rptVswr.0;
Value (OctetString): 0x76 D5 8B 3F
Does anyone have an idea how to convert that string into a readable format?
It should be something like this : 1.15 or 2.15
Many thanks for your help,
BR - Nils
Here is a pretty simple C++ app that decodes the hex data and converts it to a float. The four octets are the byte image of an IEEE-754 single-precision float, least significant byte first, so on a little-endian host they can be copied as-is (a big-endian host would need to reverse them first):
#include <iostream>
#include <cstring>
using namespace std;
int main()
{
    unsigned char ptr[] = {0x76, 0xD5, 0x8B, 0x3F};
    float f;
    memcpy(&f, ptr, sizeof f); // memcpy avoids the strict-aliasing trap of reinterpret_cast
    cout << f << endl;
    return 0;
}
The result is roughly 1.09, which is exactly the sort of VSWR value you would expect.
My experience with RF devices is that the SNMP replies are either in decimal or hex format and represent power in mW. If you take your get response 0x76 D5 8B 3F and convert hex to decimal, you get 1,993,706,303 mW. That translates to roughly 1.99 MW. For VSWR, this is an accurate and acceptable measurement if your forward power is 2+ MW.