XORing a negative number doesn't return the same value with ^ and xor() - byte

I don't understand why these don't return the same value.
150 ^ -91
-205
but
from pwn import *
xor(150,-91)
b'3'
Of course, chr(-205) is invalid and different from '3'.

It looks like the library packs the arguments into unsigned values:
strs = [packing.flat(s, word_size = 8, sign = False, endianness = 'little') for s in args]
The 8-bit two's complement representation of -91 is 0b10100101, which is 165 when interpreted as an unsigned integer:
>>> 0b10100101
165
>>> from pwn import *
>>> flat(150, word_size=8, sign=False, endianness='little')
b'\x96'
>>> flat(-91, word_size=8, sign=False, endianness='little')
b'\xa5'
>>> 0x96
150
>>> 0xa5
165
>>> 150 ^ 165
51
>>> chr(51)
'3'
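To reproduce what xor() returns using only the ^ operator, reduce each operand to its unsigned 8-bit value first. A minimal sketch in plain Python (assuming single-byte operands, as in the example above):
a, b = 150, -91
ua, ub = a & 0xFF, b & 0xFF      # 150 and 165: two's complement, truncated to one byte
result = ua ^ ub                 # 51
print(bytes([result]))           # b'3', the same as xor(150, -91)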

Related

Finding the formula for an alphanumeric code

A script I am making scans a 5-character code and assigns it a number based on the characters within the code. The code is a randomly generated number/letter combination, for example 7D3B5 or HH42B, where any position can be any one of (26 + 10) characters.
Now, the issue I am having is that I would like to figure out the number, from 0 to 36^5, based on the code. For example:
00000 = 0
00001 = 1
00002 = 2
0000A = 10
0000B = 11
0000Z = 36
00010 = 37
00011 = 38
So on and so forth until the final possible code which is:
ZZZZZ = 60466176 (36^5)
What I need to work out is a formula to figure out, let's say, G47DU in its number form, following the examples above.
Something like this?
function getCount(s) {
  if (!isNaN(s))
    return Number(s);            // digit characters '0'-'9' map to 0-9
  return s.charCodeAt(0) - 55;   // 'A' (code 65) maps to 10, 'B' to 11, ...
}

function f(str) {
  let result = 0;
  for (let i = 0; i < str.length; i++)
    result += Math.pow(36, str.length - i - 1) * getCount(str[i]);
  return result;
}

var strs = [
  '00000',
  '00001',
  '00002',
  '0000A',
  '0000B',
  '0000Z',
  '00010',
  '00011',
  'ZZZZZ'
];

for (const str of strs)
  console.log(str, f(str));
You are trying to create a base-36 numeral system. Since there are 5 'digits', each digit running from 0 to Z, the value can go from 0 to 36^5 - 1. (Comparing this with the hexadecimal system: in hexadecimal each 'digit' goes from 0 to F.) To convert this to decimal, you can use the same method used to convert from the hex or binary system to the decimal system.
It will be something like d4 * (36 ^ 4) + d3 * (36 ^ 3) + d2 * (36 ^ 2) + d1 * (36 ^ 1) + d0 * (36 ^ 0).
Note: here 36 is the total number of symbols.
d0, d1, d2, d3, d4 can range from 0 to 35 in decimal (important: not 0 to 36).
You can extend this to any number of digits or symbols, and you can implement operations like addition and subtraction in this system itself as well. (It would be fun to implement that. :) ) But it is easier to convert to decimal, do the operations, and convert back.
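As a side note, if you are working in Python rather than JavaScript, int() already understands base 36 (digits 0-9 followed by A-Z), so the conversion is a one-liner. A minimal sketch (the code G47DU is the one mentioned in the question; the printed values are simply what int() returns for these strings):
def code_to_number(code):
    # int() accepts any base up to 36, using A-Z for the digits 10-35.
    return int(code, 36)

print(code_to_number('0000Z'))   # 35
print(code_to_number('00010'))   # 36
print(code_to_number('ZZZZZ'))   # 60466175, i.e. 36**5 - 1
print(code_to_number('G47DU'))   # 27070050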

Why am I getting negative integer after adding two positive 16 bit integers?

I am a newbie to Go; in fact, I am new to typed programming in general. I only know JS.
While going through simple examples in Go tutorials, I found that adding a1 + a2 gives a negative integer value.
var a1 int16 = 127
var a2 int16 = 32767
var rr int16 = a1 + a2
fmt.Println(rr)
Result:
-32642
Expected:
The compiler would throw an error since the sum exceeds the int16 max,
(OR) Go would automatically convert the int16 to int32 and print
32,894
Can you guys explain why it is showing -32642?
This is the result of Integer Overflow behaving as defined in the specification.
You don't see your expected results, because
Overflow happens at runtime, not compile time.
Go is statically typed.
32,894 is greater than the max value representable by an int16.
It’s very simple.
A signed 16-bit integer maps the positive part to 0 - 32767 (0x0000 - 0x7FFF) and the negative part from 0x8000 (-32768) to 0xFFFF (-1).
For example, 0 - 1 = -1, and it is stored as 0xFFFF.
Now in your specific case: 32767 + 127.
You overflow because 32767 is the max value for a signed 16-bit integer, but if you force the addition anyway, 0x7FFF + 0x7F = 0x807E, and converting 0x807E to a signed 16-bit integer gives -32642.
You can understand this better here: Signed number representations
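To see the wrap-around numerically, here is a minimal Python sketch used purely as a calculator; it mimics what the int16 addition does under the hood (this is an illustration, not Go code):
a1, a2 = 127, 32767
raw = (a1 + a2) & 0xFFFF                          # keep only the low 16 bits: 0x807E
signed = raw - 0x10000 if raw >= 0x8000 else raw  # reinterpret as two's complement
print(hex(raw), signed)                           # 0x807e -32642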
Additionally, check these math constants:
const (
    MaxInt8   = 1<<7 - 1
    MinInt8   = -1 << 7
    MaxInt16  = 1<<15 - 1
    MinInt16  = -1 << 15
    MaxInt32  = 1<<31 - 1
    MinInt32  = -1 << 31
    MaxInt64  = 1<<63 - 1
    MinInt64  = -1 << 63
    MaxUint8  = 1<<8 - 1
    MaxUint16 = 1<<16 - 1
    MaxUint32 = 1<<32 - 1
    MaxUint64 = 1<<64 - 1
)
And check the human version of these values here

How do I make this program work for input >10 for the USACO Training Pages Square Palindromes?

Problem Statement -
Given a number base B (2 <= B <= 20 base 10), print all the integers N (1 <= N <= 300 base 10) such that the square of N is palindromic when expressed in base B; also print the value of that palindromic square. Use the letters 'A', 'B', and so on to represent the digits 10, 11, and so on.
Print both the number and its square in base B.
INPUT FORMAT
A single line with B, the base (specified in base 10).
SAMPLE INPUT
10
OUTPUT FORMAT
Lines with two integers represented in base B. The first integer is the number whose square is palindromic; the second integer is the square itself. NOTE WELL THAT BOTH INTEGERS ARE IN BASE B!
SAMPLE OUTPUT
1 1
2 4
3 9
11 121
22 484
26 676
101 10201
111 12321
121 14641
202 40804
212 44944
264 69696
My code works for all inputs <= 10; however, it gives some weird output for inputs > 10.
My code:
#include<iostream>
#include<cstdio>
#include<cmath>
using namespace std;

int baseToBase(int num, int base) //accepts a number in base 10 and the base to be converted into as arguments
{
    int result=0, temp=0, i=1;
    while(num>0)
    {
        result = result + (num%base)*pow(10, i);
        i++;
        num = num/base;
    }
    result/=10;
    return result;
}

long long int isPalin(int n, int base) //checks the palindrome
{
    long long int result=0, temp, num=n*n, x=n*n;
    num = baseToBase(num, base);
    x = baseToBase(x, base);
    while(num)
    {
        temp=num%10;
        result = result*10 + temp;
        num/=10;
    }
    if(x==result)
        return x;
    else
        return 0;
}

int main()
{
    int base, i, temp;
    long long int sq;
    cin >> base;
    for(i=1; i<=300; i++)
    {
        temp=baseToBase(i, base);
        sq=isPalin(i, base);
        if(sq!=0)
            cout << temp << " " << sq << endl;
    }
    return 0;
}
For input = 11, the answer should be
1 1
2 4
3 9
6 33
11 121
22 484
24 565
66 3993
77 5335
101 10201
111 12321
121 14641
202 40804
212 44944
234 53535
While my answer is
1 1
2 4
3 9
6 33
11 121
22 484
24 565
66 3993
77 5335
110 10901
101 10201
111 12321
121 14641
209 40304
202 40804
212 44944
227 50205
234 53535
There is a difference between my output and the required one: extra lines appear, so 202 shows up under 209 and 110 shows up before 101.
Help appreciated, thanks!
A simple example that shows the error in your base conversion, for B = 11: for i = 10, temp should be A, but your code calculates temp = 10. We only have the ten symbols 0-9, which is enough to represent every digit in base 10 or lower, but for larger bases you have to use other symbols, like 'A', 'B', and so on, to represent the digits 10, 11, etc. The problem description clearly states that. You should be able to fix your code now by modifying your int baseToBase(int num, int base) function.
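The fix amounts to building the converted value as a string of digit characters rather than packing it into an int. This is not the poster's code, just a minimal Python sketch of the idea (the digit alphabet 0-9 followed by A-J covers bases up to 20, as the problem requires):
DIGITS = '0123456789ABCDEFGHIJ'   # enough symbols for bases 2..20

def to_base(num, base):
    # Convert a base-10 integer to its string representation in the given base.
    if num == 0:
        return '0'
    digits = []
    while num > 0:
        digits.append(DIGITS[num % base])
        num //= base
    return ''.join(reversed(digits))

base = 11
for n in range(1, 301):
    sq = to_base(n * n, base)
    if sq == sq[::-1]:                 # palindromic square in base B
        print(to_base(n, base), sq)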

CRC Polynomial Division

I am trying to use polynomial division to find the CRC check bits, but I am struggling with the last stage of the calculation.
I believe the conversions below are correct:
Pattern = 1010
= x^3 + x
Dataword = 9 8 7
= 1001 1000 0111
= x^11 + x^8 + x^7 + x^2 + x + 1
And finally, the polynomial long division I attempted is:
x^8 + x^6 + x^5 + x^3 + x
_______________________________________
x^3 + x | x^11 + x^8 + x^7 + x^2 + x + 1
x^11 + x^9
....
x^4 + x^2 + x + 1
x^4 + x^2
= x + 1
My question is, is the remainder / answer x + 1 or do I take it a step further and remove the x leaving the remainder as just 1?
Thank you for your help!
We can check by mod-2 division (XOR) too. The following code shows a Python implementation of CRC checking; we need to follow the steps listed below:
Convert the CRC / data polynomials to their corresponding binary equivalents.
If the CRC key (the binary representation obtained from the polynomial) has k bits, we need to pad an additional k-1 bits onto the data to check for errors. In the example given, the bits 011 should be appended to the data, not 0011, since k = 4.
At the transmitter end:
The binary data is first augmented by adding k-1 zeros at the end of the data.
Use modulo-2 binary division to divide the binary data by the CRC key and store the remainder of the division.
Append the remainder to the end of the data to form the encoded data, and send it.
At the receiver end (to check whether errors were introduced in transmission):
Perform modulo-2 division again on the received data with the CRC key; if the remainder is 0, there are no errors.
Now let's implement the above:
def CRC_polynomial_to_bin_code(pol):
    return bin(eval(pol.replace('^', '**').replace('x', '2')))[2:]

def get_remainder(data_bin, gen_bin):
    ng = len(gen_bin)
    data_bin += '0' * (ng - 1)       # augment the data with k-1 zeros
    nd = len(data_bin)
    divisor = gen_bin
    i = 0
    remainder = ''
    print('\nmod 2 division steps:')
    print('divisor dividend remainder')
    while i < nd:
        j = i + ng - len(remainder)
        if j > nd:
            remainder += data_bin[i:]
            break
        dividend = remainder + data_bin[i:j]
        remainder = ''.join(['1' if dividend[k] != gen_bin[k] else '0' for k in range(ng)])
        print('{:8s} {:8s} {:8s}'.format(divisor, dividend, remainder[1:]))
        remainder = remainder.lstrip('0')
        i = j
    return remainder.zfill(ng - 1)

gen_bin = CRC_polynomial_to_bin_code('x^3+x')
data_bin = CRC_polynomial_to_bin_code('x^11 + x^8 + x^7 + x^2 + x + 1')
print('transmitter end:\n\nCRC key: {}, data: {}'.format(gen_bin, data_bin))
r = get_remainder(data_bin, gen_bin)
data_crc = data_bin + r
print('\nencoded data: {}'.format(data_crc))
print('\nreceiver end:')
r = get_remainder(data_crc, gen_bin)
print('\nremainder {}'.format(r))
if int(r, 2) == 0:   # int(r, 2) avoids eval() failing on remainders like '011'
    print('data received at the receiver end has no errors')
# ---------------------------------
# transmitter end:
#
# CRC key: 1010, data: 100110000111
#
# mod 2 division steps:
# divisor dividend remainder
# 1010 1001 011
# 1010 1110 100
# 1010 1000 010
# 1010 1000 010
# 1010 1011 001
# 1010 1100 110
# 1010 1100 110
#
# encoded data: 100110000111110
# ---------------------------------
# receiver end:
#
# mod 2 division steps:
# divisor dividend remainder
# 1010 1001 011
# 1010 1110 100
# 1010 1000 010
# 1010 1000 010
# 1010 1011 001
# 1010 1111 101
# 1010 1010 000
#
# remainder 000
# data received at the receiver end has no errors
# ---------------------------------
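As for the original question about the hand calculation: the remainder of dividing the dataword polynomial by x^3 + x directly (with no zero padding) is indeed x + 1, and you keep it as-is rather than reducing it further. A minimal Python sketch to double-check, with the polynomials encoded as integer bit patterns:
def mod2_remainder(dividend, divisor):
    # GF(2) polynomial remainder: repeatedly XOR the divisor, aligned
    # with the dividend's current leading bit, until the degree drops.
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

data = 0b100110000111   # x^11 + x^8 + x^7 + x^2 + x + 1
pattern = 0b1010        # x^3 + x
print(bin(mod2_remainder(data, pattern)))   # 0b11, i.e. x + 1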

What is a good way to iterate a number through all the possible values of a mask?

Given a bitmask where the set bits describe where another number can be one or zero, and the unset bits must be zero in that number, what's a good way to iterate through all of that number's possible values?
For example:
000 returns []
001 returns [000, 001]
010 returns [000, 010]
011 returns [000, 001, 010, 011]
100 returns [000, 100]
101 returns [000, 001, 100, 101]
110 returns [000, 010, 100, 110]
111 returns [000, 001, 010, 011, 100, 101, 110, 111]
The simplest way to do it would be to do it like this:
#include <stdio.h>

void f(int m) {
    int i;
    for (i = 0; i <= m; i++) {
        if (i == (i & m))   /* parentheses needed: == binds tighter than & */
            printf("%d\n", i);
    }
}
But this iterates through far too many numbers. It should take on the order of the number of matching values, not 2**32 iterations.
There's a bit-twiddling trick for this (it's described in detail in Knuth's "The Art of Computer Programming" volume 4A §7.1.3; see p.150):
Given a mask mask and the current combination bits, you can generate the next combination with
bits = (bits - mask) & mask
...start at 0 and keep going until you get back to 0. (Use an unsigned integer type for portability; this won't work properly with signed integers on non-two's-complement machines. An unsigned integer is a better choice for a value being treated as a set of bits anyway.)
Example in C:
#include <stdio.h>

static void test(unsigned int mask)
{
    unsigned int bits = 0;

    printf("Testing %u:", mask);
    do {
        printf(" %u", bits);
        bits = (bits - mask) & mask;
    } while (bits != 0);
    printf("\n");
}

int main(void)
{
    unsigned int n;
    for (n = 0; n < 8; n++)
        test(n);
    return 0;
}
which gives:
Testing 0: 0
Testing 1: 0 1
Testing 2: 0 2
Testing 3: 0 1 2 3
Testing 4: 0 4
Testing 5: 0 1 4 5
Testing 6: 0 2 4 6
Testing 7: 0 1 2 3 4 5 6 7
(...and I agree that the answer for 000 should be [000]!)
First of all, it's unclear why 000 wouldn't return [000]. Is that a mistake?
Otherwise, given a mask value "m" and number "n" which meets the criterion (n & ~m)==0, I would suggest writing a formula to compute the next higher number. One such formula uses the operators "and", "or", "not", and "+", once each.
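One formula that fits that description (this particular expression is my own guess at what the answer hints at, not something it states) is next = ((n | ~m) + 1) & m, which uses "or", "not", "+", and "and" once each. A quick Python check:
def next_value(n, m):
    # Fill the bits outside the mask with ones, add one so the carry
    # ripples straight through them, then clear them again with the mask.
    return ((n | ~m) + 1) & m

m = 0b101
n, values = 0, [0]
while True:
    n = next_value(n, m)
    if n == 0:
        break
    values.append(n)
print([bin(v) for v in values])   # ['0b0', '0b1', '0b100', '0b101']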
The trick by @Matthew is amazing. Here is a less tricky, but unfortunately also less efficient, recursive version in Python:
def f(mask):
    if mask == '0':
        return ['0']
    elif mask == '1':
        return ['0', '1']
    else:
        bits1 = f(mask[1:])
        bits2 = []
        for b in bits1:
            bits2.append('0' + b)
            if mask[0] == '1':
                bits2.append('1' + b)
        return bits2

print(f("101"))   # ===> ['000', '100', '001', '101']
You can do it brute-force. ;-) Ruby example:
require 'set'
set = Set.new
(0..n).each do |x|
  set << (x & n)
end
(where set is a set datatype, i.e., removes duplicates.)
