BCD to ASCII checksum

I have a very old device that I am connecting to over serial. When I send data, it wants a checksum calculated along with it. I add up all of the ASCII values of the characters in the string and convert the sum to BCD. This produces illegal BCD digits such as 1011. In the only example provided, they convert 1011 to ";". When I send the data from the example, the checksum clears fine, but when I use ";" for other illegal digits it fails. Has anyone seen the use of ";" before, and if so, does anyone have an idea what the values for the other illegal digits are?
Edit: the example I have:
STX              000 0010
1                011 0001
2                011 0010
3                011 0011
CR               000 1101
A                100 0001
B                100 0010
C                100 0011
CR               000 1101
ETX              000 0011
Total            10111 1011
Convert to BCD   1 0111 1011
Checksum         1 7 ;

Looks like they're using the six ASCII characters immediately after '9' (codes 0x3A-0x3F) for the "illegal" digits A-F:
DEC  HEX   BINARY      CHAR
48   0x30  0011 0000   0
49   0x31  0011 0001   1
50   0x32  0011 0010   2
51   0x33  0011 0011   3
52   0x34  0011 0100   4
53   0x35  0011 0101   5
54   0x36  0011 0110   6
55   0x37  0011 0111   7
56   0x38  0011 1000   8
57   0x39  0011 1001   9
58   0x3A  0011 1010   :
59   0x3B  0011 1011   ;
60   0x3C  0011 1100   <
61   0x3D  0011 1101   =
62   0x3E  0011 1110   >
63   0x3F  0011 1111   ?
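A minimal sketch of that scheme in Python (the helper name is my own; the frame is the example above):

def checksum(frame: bytes) -> str:
    total = sum(frame)                  # add up the ASCII values of every byte
    # Each hex digit of the sum becomes ASCII 0x30 + digit, so 0-9 map to
    # '0'-'9' and A-F map to ':;<=>?'.
    return "".join(chr(0x30 + int(d, 16)) for d in f"{total:X}")

frame = b"\x02123\rABC\r\x03"           # STX 1 2 3 CR A B C CR ETX
print(checksum(frame))                  # prints 17;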

Related

Xcode 11 console spew

I've noticed some strange spew in the console after updating to Xcode 11.
Has anyone else seen this, or does anyone know what the issue might be?
0000000A: 0100 4 4 319
00000016: 0101 4 4 398
00000022: 0102 3 6 110
0000002E: 011A 5 8 116
0000003A: 011B 5 8 124
00000046: 0128 3 2 3
00000052: 0131 2 13 132
0000005E: 0132 2 20 146
000000A8: 0100 4 4 205
000000B4: 0101 4 4 256
000000C0: 0102 3 6 268
000000CC: 0103 3 2 6
000000D8: 0106 3 2 6
000000E4: 0115 3 2 3
000000F0: 0201 4 4 274
000000FC: 0202 4 4 7301
etc

How do you copy a machine instruction to a register by just using shift, or, ori?

I've been trying to learn assembly language (MIPS32) on my own, and I've been following this free online curriculum that teaches it.
There's an exercise that asks me to copy ori $8, $6, 0x20 into $9 by using only or, ori, and shift. Unfortunately, an answer isn't provided, and I have no idea how to do this. Can somebody help me or point me in the right direction? Thank you.
First you have to inspect the format used for the ori instruction:
0011 01ss ssst tttt iiii iiii iiii iiii
Source: MIPS Instruction Reference
In the syntax ori $t, $s, imm the first operand is the destination, so for ori $8, $6, 0x20:
sssss is the source register, $6 = 00110
ttttt is the destination register, $8 = 01000
ii... is the immediate operand, 0x20 = ...10 0000
The resulting instruction looks as follows:
0011 01ss ssst tttt iiii iiii iiii iiii
0011 0100 1100 1000 0000 0000 0010 0000
Which we convert to hexadecimal for use in our code: 0x34C80020
Since the ori instruction accepts only a 16-bit immediate operand, we can combine it with a simple left shift to populate the upper 16 bits first with 0x34C8 and then OR in the lower 16 bits with another ori instruction.
ori $9, $0, 0x34C8 # insert upper 16 bits of the instruction word
# 0000 0000 0000 0000 0011 0100 1100 1000
sll $9, $9, 0x10 # shift them into the upper half of the register
# 0011 0100 1100 1000 0000 0000 0000 0000
ori $9, $9, 0x0020 # insert lower 16 bits of the instruction word
# 0011 0100 1100 1000 0000 0000 0010 0000
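As a quick cross-check (a Python sketch, not part of the original exercise), packing the I-type fields reproduces the same word:

def encode_itype(opcode: int, rs: int, rt: int, imm: int) -> int:
    # opcode(6) | rs(5) | rt(5) | immediate(16)
    return (opcode << 26) | (rs << 21) | (rt << 16) | (imm & 0xFFFF)

# ori $8, $6, 0x20: opcode 0b001101, rs = $6 (source), rt = $8 (destination)
print(hex(encode_itype(0b001101, 6, 8, 0x20)))  # 0x34c80020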

BaseOfCode present in PE+ executable

The MS documentation says that the BaseOfCode value is only present in PE files, not in PE+. Looking at notepad.exe with dotPeek and with PE Viewer seems to indicate that the BaseOfCode is present and consumed.
0 1 2 3 4 5 6 7 8 9 A B C D E F
0x00E0 | 5045 0000 6486 0600 6a98 8957 0000 0000
0x00F0 | 0000 0000 f000 2200 0b02 0e00 0086 0100
0x0100 | 004e 0200 0000 0000 d087 0100 0010 0000
The two bytes at 0x00F8 (0x020B, little-endian) signify that this is a PE32+ header. BaseOfCode would be the four bytes at 0x010C.
Is the documentation (and my reading of it) incorrect, or are dotPeek and PE Viewer incorrect?
The fact that these bytes aren't zeroed out would imply that they are significant in some way.
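To check the raw bytes independently, here is a minimal sketch in Python (the file path is just an example); it follows e_lfanew to the optional header, reads the magic, and then the dword at offset 20, which is where BaseOfCode would sit:

import struct

with open("notepad.exe", "rb") as f:                 # example path
    data = f.read()

e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]   # offset of the "PE\0\0" signature
opt = e_lfanew + 4 + 20                              # skip signature and COFF header
magic = struct.unpack_from("<H", data, opt)[0]       # 0x10B = PE32, 0x20B = PE32+
base_of_code = struct.unpack_from("<I", data, opt + 20)[0]
print(hex(magic), hex(base_of_code))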

How to run LuaJIT bytecode generated from `luajit -bl`?

I have a LuaJIT function in binary bytecode format (i.e. I can run it with luajit).
I ran luajit -bl on it to convert it to a text bytecode listing, and I rewrote some of the bytecode.
How can I run the modified text bytecode?
For example, here's an excerpt from my text bytecode:
-- BYTECODE -- innerfunction:0-0
0001 IST 1
0002 JMP 2 => 0004
0003 KSHORT 1 0
0004 => GGET 2 0 ; "pairs"
0005 MOV 3 0
0006 CALL 2 4 2
0007 ISNEXT 5 => 0080
0008 => GGET 7 1 ; "type"
0009 MOV 8 5
0010 CALL 7 2 2
0011 ISNES 7 2 ; "string"
0012 JMP 7 => 0019
0013 GGET 7 2 ; "string"
0014 TGETS 7 7 3 ; "format"
0015 KSTR 8 4 ; "%q"

IEEE float input to BCD conversion

I use a std_logic_vector(31 downto 0) as the input to my entity.
Is there any way to convert these 32 bits (IEEE 754 format) to ASCII form?
For example, for 3.14:
input  ----> 0100 0000 0100 1000 1111 0101 1100 0011   (IEEE 754, 32 bits)
output <---- 0011 0011 0010 1110 0011 0001 0011 0100
                3    3    2    E    3    1    3    4
               \_______/ \_______/ \_______/ \_______/
                   3         .         1         4      (in ASCII form)
The number 3.14 is only an example; the input to my entity may be any 32-bit number.
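In software terms the requested mapping looks like this (a Python sketch rather than VHDL; printing with two decimals is my assumption):

import struct

bits = 0x4048F5C3                             # 3.14 as IEEE 754 single precision
value = struct.unpack(">f", bits.to_bytes(4, "big"))[0]
text = f"{value:.2f}"                         # "3.14" (two decimals assumed)
print([hex(ord(c)) for c in text])            # ['0x33', '0x2e', '0x31', '0x34']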
