Using static_cast to convert an int to a char

I have written this code to convert decimal to binary:
string Solution::findDigitsInBinary(int A) {
    if (A == 0)
        return "0";
    else {
        string bin = "";
        while (A > 0) {
            int rem = (A % 2);
            bin.push_back(static_cast<char>(A % 2));
            A = A / 2;
        }
        reverse(bin.begin(), bin.end());
        return bin;
    }
}
But I am not getting the desired result using static_cast.
I have seen something related to this that does give the desired result:
(char)('0' + rem)
What is the difference from my static_cast version? Why am I not getting the correct binary output?

With:
(char)('0' + rem)
The important difference is not the cast, but that the remainder, which is always 0 or 1, is added to the character '0'. That means you are appending the character '0' or '1' to your string.
In your version you are appending the integer value 0 or 1, but the character codes for '0' and '1' are 48 and 49. Adding the remainder to '0' gives a value of either 48 (the character '0') or 49 (the character '1').
If you do the same thing in your code it will also work:
string findDigitsInBinary(int A) {
    if (A == 0)
        return "0";
    else {
        string bin = "";
        while (A > 0) {
            int rem = (A % 2);
            bin.push_back(static_cast<char>(A % 2 + '0')); // remainder + '0'
            A = A / 2;
        }
        reverse(bin.begin(), bin.end());
        return bin;
    }
}
Basically you should be adding characters to the string, not numbers. So you shouldn't be adding the values 0 and 1 to the string; you should be adding the values 48 (the character '0') and 49 (the character '1').
An ASCII chart illustrates this: the character '0' has the decimal value 48. Say you wanted to append the digit 4 to the string; because '0' is decimal 48, you would actually want to append the decimal value 52, i.e. 48 + 4. This is what '0' + rem does. The conversion is done automatically for you if you append a character, that is, if you do:
mystring += 'A';
It adds an 'A' character to your string, but what it is actually doing is storing the value 65, the character code of 'A'. What your code appends instead are the raw integer values 0 and 1, and in ASCII/Unicode those are non-printing control characters, not the digits '0' and '1'.
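To see the difference concretely, here is a minimal sketch (my own example, not from the original code):

#include <iostream>
#include <string>

int main() {
    int rem = 4;  // pretend this is the remainder of a % operation
    std::string wrong, right;

    wrong.push_back(static_cast<char>(rem));        // appends the control character with code 4, not '4'
    right.push_back(static_cast<char>('0' + rem));  // appends '4': code 48 + 4 = 52

    std::cout << ('0' + rem) << '\n';  // prints 52: char + int promotes to int
    std::cout << right << '\n';        // prints 4
}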
Now that you understand how characters are encoded: casting an integer to a char does not turn the integer into its character representation, it only changes the data type from int to char, i.e. from a (most likely) 4-byte data type to a 1-byte data type. Your cast did the following:
After the modulo % operation you got a result of either 0 or 1 as an integer. Let's say you got a remainder of 1; as an int it looks like this:
00000000 00000000 00000000 00000001
After the cast to char it is truncated to a one-byte data type, which looks like this:
00000001 // now it's a one-byte data type
Whereas the digit '1' encoded as a string character is 49, which looks like this:
00110001
As for the difference between static_cast and a C-style cast: static_cast performs compile-time checks and only allows casts between certain types based on particular rules, whereas a C-style cast is far less restrictive.
char a = 5;
int* p = static_cast<int*>(&a); // Will not compile
int* p2 = (int*)&a; // Will compile and run, but is discouraged as there are risks.
*p2 = 7; // You've written past the single byte char into 3 extra bytes, which is an access violation, or undefined behaviour.

Related

How to coerce math.Inf to an integer?

I've got some code I'm using to do comparisons, and I want to start with infinite values. Here's a snippet of my code.
import (
    "fmt"
    "math"
)

func snippet(arr []int) {
    least := int(math.Inf(1))
    greatest := int(math.Inf(-1))
    fmt.Println("least", math.Inf(1), least)
    fmt.Println("greatest", math.Inf(-1), greatest)
}
and here's the output I get from the console
least +Inf -9223372036854775808
greatest -Inf -9223372036854775808
Why is +Inf coerced into a negative int?
Infinity is not representable by int.
According to the Go spec:
In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
Maybe you are looking for the largest representable int? How to get it is explained here.
math.Inf() returns an IEEE double-precision float representing positive infinity if the sign of the argument is >= 0, and negative infinity if the sign is < 0, so your code is incorrect.
But the Go language specification (always good to read the specification) says this:
Conversions between numeric types
...
In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
Two's complement integer values don't have the concept of infinity, so the result is implementation dependent.
Myself, I'd have expected to get the largest or smallest integer value for the integer type the cast is targeting, but apparently that's not the case.
The runtime source file responsible for the conversion is https://go.dev/src/runtime/softfloat64.go
And this is the actual source code.
Note that an IEEE-754 double-precision float is a 64-bit double word, consisting of:
a sign bit, the high-order (most significant/leftmost) bit: 0 indicating positive, 1 indicating negative;
a biased exponent, consisting of the next 11 bits; and
a mantissa, consisting of the remaining 52 bits, which can be denormalized.
Positive infinity is a special value with a sign bit of 0, an exponent of all 1 bits, and a mantissa of all 0 bits:
0 11111111111 0000000000000000000000000000000000000000000000000000
or 0x7FF0000000000000.
Negative infinity is the same, with the exception that the sign bit is 1:
1 11111111111 0000000000000000000000000000000000000000000000000000
or 0xFFF0000000000000.
Looks like funpack64() returns 5 values:
a uint64 representing the sign (0 or the very large non-zero value 0x8000000000000000),
a uint64 representing the normalized mantissa,
an int representing the exponent,
a bool indicating whether or not this is +/- infinity, and
a bool indicating whether or not this is NaN.
From that, you should be able to figure out why it returns the value it does.
[Frankly, I'm surprised that f64toint() doesn't short-circuit when funpack64() returns fi = true.]
(Note that this software implementation is only used on targets without hardware floating point; on amd64, for instance, this conversion compiles to the hardware CVTTSD2SI instruction, which returns the "integer indefinite" value 0x8000000000000000, i.e. -9223372036854775808, for any unrepresentable input, which is consistent with the output shown in the question.)
const mantbits64 uint = 52
const expbits64 uint = 11
const bias64 = -1<<(expbits64-1) + 1

func f64toint(f uint64) (val int64, ok bool) {
    fs, fm, fe, fi, fn := funpack64(f)

    switch {
    case fi, fn: // NaN
        return 0, false

    case fe < -1: // f < 0.5
        return 0, false

    case fe > 63: // f >= 2^63
        if fs != 0 && fm == 0 { // f == -2^63
            return -1 << 63, true
        }
        if fs != 0 {
            return 0, false
        }
        return 0, false
    }

    for fe > int(mantbits64) {
        fe--
        fm <<= 1
    }
    for fe < int(mantbits64) {
        fe++
        fm >>= 1
    }
    val = int64(fm)
    if fs != 0 {
        val = -val
    }
    return val, true
}

func funpack64(f uint64) (sign, mant uint64, exp int, inf, nan bool) {
    sign = f & (1 << (mantbits64 + expbits64))
    mant = f & (1<<mantbits64 - 1)
    exp = int(f>>mantbits64) & (1<<expbits64 - 1)

    switch exp {
    case 1<<expbits64 - 1:
        if mant != 0 {
            nan = true
            return
        }
        inf = true
        return

    case 0:
        // denormalized
        if mant != 0 {
            exp += bias64 + 1
            for mant < 1<<mantbits64 {
                mant <<= 1
                exp--
            }
        }

    default:
        // add implicit top bit
        mant |= 1 << mantbits64
        exp += bias64
    }
    return
}

Finding the formula for an alphanumeric code

A script I am making scans a 5-character code and assigns it a number based on the contents of characters within the code. The code is a randomly-generated number/letter combination, for example 7D3B5 or HH42B, where any position can be any one of 36 (26 + 10) characters.
Now, the issue I am having is that I would like to figure out the number from 0 to 36^5 - 1 based on the code. For example:
00000 = 0
00001 = 1
00002 = 2
0000A = 10
0000B = 11
0000Z = 35
00010 = 36
00011 = 37
So on and so forth until the final possible code, which is:
ZZZZZ = 60466175 (36^5 - 1)
What I need to work out is a formula to figure out, let's say, G47DU in its number form, using the examples above.
Something like this?
function getCount(s){
    if (!isNaN(s))
        return Number(s);
    return s.charCodeAt(0) - 55;
}

function f(str){
    let result = 0;
    for (let i = 0; i < str.length; i++)
        result += Math.pow(36, str.length - i - 1) * getCount(str[i]);
    return result;
}

var strs = [
    '00000',
    '00001',
    '00002',
    '0000A',
    '0000B',
    '0000Z',
    '00010',
    '00011',
    'ZZZZZ'
];

for (const str of strs)
    console.log(str, f(str));
You are trying to create a base-36 numeral system. Since there are 5 'digits', each going from 0 to Z, the value can go from 0 to 36^5 - 1. (Comparing with the hexadecimal system: in hexadecimal each 'digit' goes from 0 to F.) To convert this to decimal, you can use the same method used to convert from the hex or binary systems to the decimal system.
It will be something like d4 * (36 ^ 4) + d3 * (36 ^ 3) + d2 * (36 ^ 2) + d1 * (36 ^ 1) + d0 * (36 ^ 0)
Note: Here 36 is the total number of symbols.
d0, d1, d2, d3, d4 can each range from 0 to 35 in decimal (important: not 0 to 36).
Also, you can extend this to any number of digits or symbols, and you can implement operations like addition and subtraction directly in this system as well. (It would be fun to implement that. :) ) But it is easier to convert to decimal, do the operations, and convert back.
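As a worked example, here is the asker's G47DU with the digit values written out (G = 16, 4 = 4, 7 = 7, D = 13, U = 30):
16 * (36 ^ 4) + 4 * (36 ^ 3) + 7 * (36 ^ 2) + 13 * (36 ^ 1) + 30 * (36 ^ 0)
= 26873856 + 186624 + 9072 + 468 + 30
= 27070050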

Converting Number to Binary String

The following is the code to convert a number to binary string. Can anyone tell me how ans.push_back((char)('0' + rem)) works?
class Solution {
public:
    string findDigitsInBinary(int n) {
        string ans;
        if (n == 0) return "0";
        while (n > 0) {
            int rem = n % 2;
            ans.push_back((char)('0' + rem));
            n /= 2;
        }
        reverse(ans.begin(), ans.end());
        return ans;
    }
};
To understand it, you just need to know that you can do arithmetic operations on char variables too. So, the simple loop below is valid and will print 0123456789.
for (char c = '0'; c <= '9'; ++c)
    cout << c;
In your code, rem is either 0 or 1. So (char)('0' + rem) is either '0' or '1' as desired, corresponding to rem = 0 and rem = 1, respectively.
while (n > 0) {
    int rem = n % 2;
    ans.push_back((char)('0' + rem));
    n /= 2;
}
Focus on this loop, and suppose n is 5.
n > 0 is true, so we enter the loop. rem = n % 2, so rem = 5 % 2 = 1.
In ans.push_back((char)('0' + rem)), the expression ('0' + rem) is (48 + 1), since the ASCII code of '0' is 48.
48 + 1 = 49 converted to char is '1', so '1' is pushed into ans. Then n /= 2 sets n to 5 / 2 = 2 (integer division), and control goes back to the while condition. The next iterations push '0' (from 2 % 2) and '1' (from 1 % 2), after which n becomes 0 and the loop exits. After the loop you reverse the contents of ans and you have the binary string of the number n: "101" for 5.
First you get rem as n % 2, so the value of rem can be either 0 or 1.
In ans.push_back((char)('0' + rem)); you need to append the corresponding character to the string, that is, either '0' or '1'. For this you take '0' as the base character and simply add rem to it, using its ASCII value. In such integer arithmetic the character '0' is treated as its ASCII value, which is 48, so after adding rem the result is either 48 + 0 = 48 or 48 + 1 = 49.
Finally, this value is cast back to char, with 48 being '0' and 49 being '1'.
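For completeness, here is a minimal sketch of how the class could be driven (the includes and main are mine, not part of the original snippet):

#include <algorithm>
#include <iostream>
#include <string>
using namespace std;

class Solution {
public:
    string findDigitsInBinary(int n) {
        string ans;
        if (n == 0) return "0";
        while (n > 0) {
            ans.push_back((char)('0' + n % 2));
            n /= 2;
        }
        reverse(ans.begin(), ans.end());
        return ans;
    }
};

int main() {
    Solution s;
    cout << s.findDigitsInBinary(5) << '\n';   // prints 101
    cout << s.findDigitsInBinary(12) << '\n';  // prints 1100
}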

SIMPLE-TLV vs BER-TLV

I have found docs referring to SIMPLE-TLV and BER-TLV. I have looked into most of the EMV and GP docs, but they do not explain the difference.
Could anyone help me understand the difference between the two?
Data fields in ISO/IEC 7816-4 for smart cards
BER encoding
This is the specification of the more common BER encoding used by ISO/IEC 7816-4:
Each BER-TLV data object shall consist of 2 or 3 consecutive fields (see ISO/IEC 8825 and annex D).
The tag field T consists of one or more consecutive bytes. It encodes a class, a type and a number. The length field consists of one or more consecutive bytes. It encodes an integer L. If L is not null, then the value field V consists of L consecutive bytes. If L is null, then the data object is empty: there is no value field.
Note that ISO/IEC 7816 only allows the use of up to 5 length bytes (specifying a size up to 2^32 - 1 bytes) in the current standard. Indefinite length encoding is not supported either. These limitations are specific to smart cards. Note that the 4- and 5-byte length encodings were introduced in a later version of ISO/IEC 7816-4; earlier cards / card reading applications may only support 3 length bytes (i.e. a value size up to 64 KiB instead of 4 GiB).
The BER TLV specification is much more expansive (which is why SIMPLE-TLV is called "simple"). I won't go into the details too much as there is plenty of information available on the internet. To name just a few differences, the tags have syntactical meaning and may consist of multiple bytes and the length encoding is rather complex.
Normally BER should only be used as an encoding of ASN.1 structures, with the ASN.1 syntax defining the structure. ISO 7816-4 however messes this up and only specifies the BER tag bytes directly.
Note that sometimes DER is specified instead of BER. In that case you should only use the minimum number of bytes for the length field - e.g. a single length byte with value 05 in the samples below. The ISO/IEC specification of BER encoding is basically a copy of the ITU-T X.690 standard, also reflected in the international standard ISO/IEC 8825-1 (both payware).
SIMPLE-TLV encoding
The BER specification in ISO/IEC 7816-4 is followed by the SIMPLE-TLV specification. SIMPLE-TLV is specific to ISO 7816-4.
Each SIMPLE-TLV data object shall consist of 2 or 3 consecutive fields.
The tag field T consists of a single byte encoding only a number from 1 to 254 (e.g. a record identifier). It codes no class and no construction-type. The length field consists of 1 or 3 consecutive bytes. If the leading byte of the length field is in the range from '00' to 'FE', then the length field consists of a single byte encoding an integer L valued from 0 to 254. If the leading byte is equal to 'FF', then the length field continues on the two subsequent bytes, which encode an integer L with a value from 0 to 65535. If L is not null, then the value field V consists of L consecutive bytes. If L is null, then the data object is empty: there is no value field.
Note that the standard forgets to specify the endianness directly. You can however assume big endian encoding within ISO/IEC 7816-4.
Samples
The following samples all convey the same tag number (which defines the field) and the same value, except the last one, which defines tag number 31 for BER.
Sample SIMPLE-TLV:
0F 05 48656C6C6F // tag number 15, length 5 then the value
0F FF0005 48656C6C6F // tag number 15, length 5 (three-byte length field), then the value
Sample BER-TLV:
4F 05 48656C6C6F // *application specific*, primitive encoding of tag number 15, length 5 then the value
4F 8105 48656C6C6F // the same, using two bytes to encode the length
4F 820005 48656C6C6F // the same, using three bytes to encode the length
4F 83000005 48656C6C6F // the same, using four bytes to encode the length
4F 8400000005 48656C6C6F // the same , using five bytes to encode the length
5F0F 05 48656C6C6F // **invalid** encoding of the same, with two bytes for the tag, specifying a tag number 15 which is smaller than 31
5F1F 05 48656C6C6F // application specific, primitive encoding of **tag number 31**
In the last example with the two-byte tag encoding, the first byte starts from 40 hex, whose three leftmost bits 010 specify application-specific, primitive encoding; the magic value 1F (31) is added to it to indicate that another byte follows with the actual tag number, again 1F, so tag number 31.
Differences
The following differences should be noted:
SIMPLE-TLV is a different method of encoding for tag and length (although the encoding may look similar, e.g. when using a single byte to indicate the length part)
SIMPLE-TLV does not contain information about the class of the field, e.g. if it is defined for ASN.1 (because it is not linked to ASN.1)
SIMPLE-TLV does not contain information if it is primitive or constructed (primitive directly specifies a value, constructed means nested TLV structures)
SIMPLE-TLV has restrictions regarding the tag number (between 1 and 254, inclusive) and length (up to 65535)
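To make the two length encodings concrete, here is a minimal decoding sketch (my own code, not from the standard; the function names are made up):

#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Decode a BER-TLV length field starting at buf[pos]; advances pos past it.
uint32_t readBerLength(const std::vector<uint8_t>& buf, std::size_t& pos) {
    uint8_t first = buf.at(pos++);
    if (first < 0x80)                      // short form: one byte, 0..127
        return first;
    std::size_t extra = first & 0x7F;      // long form: 0x81..0x84 on smart cards
    if (extra == 0 || extra > 4)
        throw std::runtime_error("length encoding not allowed by ISO/IEC 7816-4");
    uint32_t len = 0;
    while (extra--)
        len = (len << 8) | buf.at(pos++);  // big endian
    return len;
}

// Decode a SIMPLE-TLV length field: one byte 00..FE, or FF plus two bytes.
uint32_t readSimpleLength(const std::vector<uint8_t>& buf, std::size_t& pos) {
    uint8_t first = buf.at(pos++);
    if (first != 0xFF)                     // single byte, 0..254
        return first;
    uint32_t len = (uint32_t(buf.at(pos)) << 8) | buf.at(pos + 1);
    pos += 2;
    return len;
}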
SIMPLE-TLV simply consists of Tag (or Type), Length, and Value.
BER-TLV is a special TLV which can have one or more TLVs inside its Value, i.e. a composite (constructed) structure:
Tag1 Len1 [Tag2-Len2-Value2 Tag3-Len3-Value3 ... TagN-LenN-ValueN]
          |<--------------------- Value1 --------------------->|
Example C# code for a BER-TLV parser: https://github.com/umitkoc/BertTlv
public class Tlv : ITlv, IFile
{
    List<TlvModel> modelList = new();
    string parser = "";
    string length = "";
    String empty = "";
    string ascii = "";
    int decValue = 0;
    int step = 0;

    public Tlv(String data)
    {
        TlvParser(data.Replace(" ", ""));
    }

    public void readTag()
    {
        String line = "";
        StreamReader sr = new StreamReader("taglist.txt");
        while ((line = sr.ReadLine()) != null)
        {
            modelList.Add(new()
            {
                tag = line.Split(",")[0].Trim(),
                description = line.Split(",")[1].Trim()
            });
        }
        sr.Close();
    }

    public void insertTag()
    {
        try
        {
            StreamWriter sw = new StreamWriter("test.txt");
            foreach (var item in modelList)
            {
                sw.WriteLine($"{item.tag},{item.description}");
            }
            sw.Close();
        }
        catch (Exception e)
        {
            Console.WriteLine("Exception: " + e.Message);
        }
    }

    public int writeFile(String parser)
    {
        StreamWriter sw = new StreamWriter("output.txt");
        sw.WriteLine(parser);
        sw.Close();
        return 0;
    }

    private int TlvParser(String data, int i = 0, string tag = "")
    {
        if (i == 0)
        {
            readTag();
        }
        if (i < data.Length)
        {
            tag += data[i];
            TlvModel model = getTag(tag);
            if (model != null)
            {
                decValue = int.Parse(data.Substring(i + 1, 2), System.Globalization.NumberStyles.HexNumber);
                // lengthControl(data, i + 3, decValue);
                if (model.description.Contains("Template"))
                {
                    parser += $"{empty}|------ tag: {model.tag}({model.description})\n";
                    step += 1;
                    empty = Empty();
                    return TlvParser(data, i + 3, "");
                }
                else
                {
                    parser += $"{empty}|------ tag: {model.tag}({model.description}){empty}|------ value --> {ConvertHex(data.Substring(i + 3, decValue * 2))} \n";
                }
                i += 3 + decValue * 2;
                return TlvParser(data, i, "");
            }
            else
            {
                return TlvParser(data, i + 1, tag);
            }
        }
        return writeFile(parser);
    }

    public TlvModel getTag(string tag)
    {
        return modelList.Find(i => i.tag == tag);
    }

    public string ConvertHex(string hex)
    {
        ascii = "";
        for (int i = 0; i < hex.Length; i += 2)
        {
            ascii += System.Convert.ToChar(System.Convert.ToUInt32(hex.Substring(i, 2), 16));
        }
        return ascii;
    }

    private string Empty()
    {
        for (int s = 0; s < step; s++)
        {
            empty += "\t";
        }
        return empty;
    }

    public void setTag(TlvModel model)
    {
        modelList.Add(model);
        insertTag();
    }
}

Is it possible to convert any base to any base (range 2 to 46)

I know it is simple and possible to convert any base to any base: first convert the source base to decimal, and then decimal to the target base. However, I had done this before for the range 2 to 36, but never for 2 to 46.
I don't understand what to use after 36, because 36 means 'z' (the ten decimal digits followed by the 26 letters of the alphabet).
Please explain what happens after 36.
Every base has a purpose. Usually we do base conversion to make complex computations simpler.
Here are some of the most popular bases and their representations:
2-binary numeral system
used internally by nearly all computers, is base two. The two digits are 0 and 1, representing switches that are OFF and ON respectively.
8-octal system
is occasionally used in computing. The eight digits are 0–7.
10-decimal system
the most used system of numbers in the world, is used in arithmetic. Its ten digits are 0–9.
12-duodecimal (dozenal) system
is often used due to divisibility by 2, 3, 4 and 6. It was traditionally used as part of quantities expressed in dozens and grosses.
16-hexadecimal system
is often used in computing. The sixteen digits are 0–9 followed by A–F.
60-sexagesimal system
originated in ancient Sumeria and passed to the Babylonians. It is still used as the basis of our modern circular coordinate system (degrees, minutes, and seconds) and time measuring (minutes and hours).
64-Base 64
is also occasionally used in computing, using as digits A–Z, a–z, 0–9, plus two more characters, often + and /.
256-bytes
is used internally by computers, actually grouping eight binary digits together. For reading by humans, bytes are usually shown in hexadecimal.
The octal, hexadecimal and base-64 systems are often used in computing because of their ease as shorthand for binary. For example, every hexadecimal digit has an equivalent 4 digit binary number.
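For example, the hexadecimal byte 2F corresponds to 0010 1111 in binary: 2 → 0010 and F → 1111.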
Radices are usually natural numbers. However, other positional systems are possible, e.g. golden ratio base (whose radix is a non-integer algebraic number), and negative base (whose radix is negative).
Your doubt is whether we can convert any base to any other base once the base exceeds 36 (26 letters + 10 digits = 36 symbols).
Taking base 64 as an example: it uses A-Z (26 upper case), a-z (26 lower case), 0-9 (10), plus 2 more characters. This is how the constraint of 36 is resolved.
As we have 64 (26 + 26 + 10 + 2) symbols in base 64 for representation, we can represent any number in base 64. Similarly, larger bases simply use different symbols for representation.
Source: http://en.wikipedia.org/wiki/Radix
The symbols you use for digits are arbitrary. For example, base64 encoding uses 'A' to represent the zero-valued digit, and '0' represents the digit with the value 52. In base64 the digits go through the alphabet A-Z, then the lower case alphabet a-z, then the traditional digits 0-9, and then usually '+' and '/'.
One historical base-60 system (Babylonian cuneiform) used its own set of symbols entirely.
So the symbols used are arbitrary. There's nothing that 'happens' after 36 except what you say happens in your system.
With number systems, you are allowed to play god.
Playing god
What you need to understand is that symbols are completely arbitrary. There is no god-given rule for "what comes after 36". You are free to define whatever you like.
To encode numbers with a certain base, all you need is the following:
base-many distinct symbols
a total order on the symbols
An arbitrary example
Naturally, there's an infinite amount of possibilities to create such a symbol table for a certain base:
Θ
ェ
す
)
0
・
_
o
や
ι
You could use this to encode numbers with base 10, Θ being the zero element, ェ being the one, and so on.
Conventions
Of course, your peers would not be too happy if you started using the above symbol table. Because the symbols are arbitrary, we need conventions. 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 is a convention, as are the symbols we use for hexadecimal, binary, etc. It is generally agreed upon what symbol table we use for what basis, that is why we can read the numbers someone else writes down.
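A minimal sketch of the idea (my own example, not from the question): pick any 46 distinct, ordered symbols and base-46 conversion works exactly like base 16 or base 36.

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>

// 46 arbitrary but ordered symbols: 0-9, A-Z, then ten punctuation marks.
const std::string DIGITS46 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()";

std::string toBase(uint64_t value, const std::string& digits) {
    if (value == 0) return std::string(1, digits[0]);
    std::string out;
    while (value > 0) {
        out.push_back(digits[value % digits.size()]);  // least significant digit first
        value /= digits.size();
    }
    std::reverse(out.begin(), out.end());
    return out;
}

int main() {
    std::cout << toBase(123456789, DIGITS46) << '\n';  // 123456789 rendered in base 46
}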
The important thing to remember is that all numbers are symbols for a value. Thus, if you wanted to, you could just make a list containing the symbol for each position's value. After base 36 you simply run out of characters that form a logical sequence. For example, if you used the Cambodian alphabet, with its 70-odd characters, you could do base 80.
Here is the complete code I have written; hope this helps.
import java.util.Scanner;

/*
 * author : roottraveller, nov 4th 2017
 */
public class BaseXtoBaseYConversion {

    BaseXtoBaseYConversion() {
    }

    public static String convertBaseXtoBaseY(String inputNumber, final int inputBase, final int outputBase) {
        int decimal = baseXToDecimal(inputNumber, inputBase);
        return decimalToBaseY(decimal, outputBase);
    }

    private static int baseXNumeric(char input) {
        if (input >= '0' && input <= '9') {
            return Integer.parseInt(input + "");
        } else if (input >= 'a' && input <= 'z') {
            return (input - 'a') + 10;
        } else if (input >= 'A' && input <= 'Z') {
            return (input - 'A') + 10;
        } else {
            return Integer.MIN_VALUE;
        }
    }

    public static int baseXToDecimal(String input, final int base) {
        if (input.length() <= 0) {
            return Integer.MIN_VALUE;
        }
        int decimalValue = 0;
        int placeValue = 0;
        for (int index = input.length() - 1; index >= 0; index--) {
            decimalValue += baseXNumeric(input.charAt(index)) * (Math.pow(base, placeValue));
            placeValue++;
        }
        return decimalValue;
    }

    private static char baseYCharacter(int input) {
        if (input >= 0 && input <= 9) {
            String str = String.valueOf(input);
            return str.charAt(0);
        } else {
            return (char) ('a' + (input - 10));
            //return ('A' + (input - 10));
        }
    }

    public static String decimalToBaseY(int input, int base) {
        if (input == 0) {
            return "0"; // without this, 0 would convert to an empty string
        }
        String result = "";
        while (input > 0) {
            int remainder = input % base;
            input = input / base;
            result = baseYCharacter(remainder) + result; // Important: notice the reverse order here
        }
        return result;
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.println("Enter : number baseX baseY");
        while (true) {
            String inputNumber = scanner.next();
            int inputBase = scanner.nextInt();
            int outputBase = scanner.nextInt();
            String outputNumber = convertBaseXtoBaseY(inputNumber, inputBase, outputBase);
            System.out.println("Result = " + outputNumber);
        }
    }
}
