I'm converting a program to fully free-format RPG, but I don't know how to replace a data structure with fixed positions, and I can't find a good example online either.
I have two data structures like the ones below.
I tried
dcl-ds bl dim(12)
bl01 char(7);
bl02 char(7);
...
end-ds
and
dcl-s bl char(7) dim(12);
This is the part that I'm trying to convert:
//*************************************************************************
// NORMAL DATA STRUCTURES *
//*************************************************************************
D DS
D BL 1 84
D DIM(12) BARCODE LABEL
D BL01 1 7
D BL02 8 14
D BL03 15 21
D BL04 22 28
D BL05 29 35
D BL06 36 42
D BL07 43 49
D BL08 50 56
D BL09 57 63
D BL10 64 70
D BL11 71 77
D BL12 78 84
D DS
D TL 1 72
D DIM(12) TEXT LABEL
D TL01 1 6
D TL02 7 12
D TL03 13 18
D TL04 19 24
D TL05 25 30
D TL06 31 36
D TL07 37 42
D TL08 43 48
D TL09 49 54
D TL10 55 60
D TL11 61 66
D TL12 67 72
Thanks in advance
If you code the array last, you don't have to hardcode the positions of all the other subfields.
dcl-ds *n;
bl01 char(7);
bl02 char(7);
bl03 char(7);
...
bl char(7) dim(12) pos(1);
end-ds;
You can also use SAMEPOS(bl01) to define the array starting at the same position as BL01. I like coding it this way because it makes the relationship between BL01 and BL clearer. Using SAMEPOS would be the best way to code the subfields if BL01 weren't the first subfield in the data structure.
dcl-ds *n;
bl01 char(7);
bl02 char(7);
bl03 char(7);
...
bl char(7) dim(12) samepos(bl01);
end-ds;
EDIT: Do not use the first option here. I'm leaving the answer as-is because it is good to note when an option is incorrect.
You have two options here: pos or overlay. overlay positions a subfield relative to another field, while pos gives an absolute position.
dcl-ds *n;
bl char(7) dim(12);
bl01 char(7) overlay(bl);
bl02 char(7) overlay(bl:*next);
bl03 char(7) overlay(bl:*next);
bl04 char(7) overlay(bl:*next);
bl05 char(7) overlay(bl:*next);
bl06 char(7) overlay(bl:*next);
bl07 char(7) overlay(bl:*next);
bl08 char(7) overlay(bl:*next);
bl09 char(7) overlay(bl:*next);
bl10 char(7) overlay(bl:*next);
bl11 char(7) overlay(bl:*next);
bl12 char(7) overlay(bl:*next);
end-ds;
The other option:
dcl-ds *n;
bl char(7) dim(12) pos(1);
bl01 char(7) pos(1);
bl02 char(7) pos(8);
bl03 char(7) pos(15);
...
end-ds;
I have a Kafka topic that carries protobuf messages of this format:
message CreditTransaction {
  string date = 1;
  float amount = 2;
}
message DebitTransaction {
  string date = 1;
  float amount = 2;
}
// ... other message definitions
message TransactionEvent {
  oneof event {
    CreditTransaction credit = 1;
    DebitTransaction debit = 2;
    Trade trade = 3;
    // ... other fields
  }
}
Using pyspark streaming, when I try to parse it with the ParseFromString method, it gives me this error:
File "./google.zip/google/protobuf/message.py", line 202, in ParseFromString
return self.MergeFromString(serialized)
File "./google.zip/google/protobuf/internal/python_message.py", line 1128, in MergeFromString
if self._InternalParse(serialized, 0, length) != length:
File "./google.zip/google/protobuf/internal/python_message.py", line 1178, in InternalParse
raise message_mod.DecodeError('Field number 0 is illegal.')
google.protobuf.message.DecodeError: Field number 0 is illegal.
Is it because the message TransactionEvent has only a single field, and that too a oneof type?
I tried adding a dummy int64 id field as well:
message TransactionEvent {
  int64 id = 1;
  oneof event {
    CreditTransaction credit = 2;
    DebitTransaction debit = 3;
    Trade trade = 4;
    // ... other fields
  }
}
but still the same error.
The code I am using:
from pyspark.sql.functions import col, udf
from pyspark.sql.types import StringType

def parse_protobuf_from_bytes(msg_bytes):
    msg = schema_pb2.MarketDataEvent()
    msg.ParseFromString(msg_bytes)
    eventStr = msg.WhichOneof("event")
    if eventStr == "credit":
        # some code
    elif eventStr == "debit":
        # some code
    return str(concatenatedFieldsValue)

parse_protobuf = udf(lambda x: parse_protobuf_from_bytes(x), StringType())
kafka_conf = {
    "kafka.bootstrap.servers": "kafka.broker.com:9092",
    "checkpointLocation": "/user/aiman/checkpoint/kafka_local/transactions",
    "subscribe": "TRANSACTIONS",
    "startingOffsets": "earliest",
    "enable.auto.commit": False,
    "value.deserializer": "ByteArrayDeserializer",
    "group.id": "my-group"
}

df = spark.readStream \
    .format("kafka") \
    .options(**kafka_conf) \
    .load()

data = df.selectExpr("offset", "CAST(key AS STRING)", "value") \
    .withColumn("event", parse_protobuf(col("value")))

df2 = data.select(col("offset"), col("event"))
If I just print the bytes without parsing, I get this:
-------------------------------------------
Batch: 0
-------------------------------------------
+-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|offset |event |
+-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|7777777|bytearray(b'\x00\x00\x00\x00\xc2\x02\x0e\x1a]\n\x11RIOT_230120P35.00\x10\x80\xa6\xae\x82\xd9\xed\xf1\xfe\x16\x18\xcd\xd9\xd9\x82\xd9\xed\xf1\xfe\x16 \xe2\xf7\xd9\x82\xd9\xed\xf1\xfe\x16(\x95\xa2\xed\xff\xd9\xed\xf1\xfe\x160\x8c\xaa\xed\xff\xd9\xed\xf1\xfe\x168\x80\xd1\xb6\xc1\x0b#\xc0\x8d\xa3\xba\x0bH\x19P\x04Z\x02Q_b\x02A_') |
+-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
The raw data you are trying to decode is:
b'\x00\x00\x00\x00\xc2\x02\x0e\x1a]\n\x11RIOT_230120P35.00\x10\x80\xa6\xae\x82\xd9\xed\xf1\xfe\x16\x18\xcd\xd9\xd9\x82\xd9\xed\xf1\xfe\x16 \xe2\xf7\xd9\x82\xd9\xed\xf1\xfe\x16(\x95\xa2\xed\xff\xd9\xed\xf1\xfe\x160\x8c\xaa\xed\xff\xd9\xed\xf1\xfe\x168\x80\xd1\xb6\xc1\x0b#\xc0\x8d\xa3\xba\x0bH\x19P\x04Z\x02Q_b\x02A_'
A valid protobuf message never starts with 0x00, because the field number 0 is reserved. The message appears to have some extra data in the beginning.
Comparing with the protobuf encoding specification, we can try to make sense of this.
Starting from the string RIOT_230120P35.00, it is correctly prefixed by its length 0x11 (17 characters). The byte before that is 0x0A, the tag for field 1 with the length-delimited (string) wire type, as in the CreditTransaction message. Reading the message backwards from there, everything looks reasonable up to the 0x1A byte.
After stripping the first 7 bytes and converting to hex (1a 5d 0a 11 52 49 4f 54 5f 32 33 30 31 32 30 50 33 35 2e 30 30 10 80 a6 ae 82 d9 ed f1 fe 16 18 cd d9 d9 82 d9 ed f1 fe 16 20 e2 f7 d9 82 d9 ed f1 fe 16 28 95 a2 ed ff d9 ed f1 fe 16 30 8c aa ed ff d9 ed f1 fe 16 38 80 d1 b6 c1 0b 40 c0 8d a3 ba 0b 48 19 50 04 5a 02 51 5f 62 02 41 5f), the message is accepted by online protobuf decoder.
It seems the message has 7 extra bytes in the beginning for some reason. These bytes do not conform to the protobuf format and their meaning cannot be determined without some information from the developer of the other endpoint of the communication.
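If the prefix really is a fixed 7-byte framing header added by the producer (this is an assumption; confirm it with whoever writes to the topic, since schema-registry-style serializers, for example, prepend such framing), a workaround is to strip it before calling ParseFromString. A minimal sketch using the sample bytes above:

```python
def strip_framing(msg_bytes, prefix_len=7):
    # Drop a fixed-length, non-protobuf prefix before parsing.
    # prefix_len=7 matches the extra bytes observed in this message;
    # verify the actual framing with the producer of the topic.
    return bytes(msg_bytes[prefix_len:])

# Truncated sample of the raw Kafka value shown above
raw = bytearray(b'\x00\x00\x00\x00\xc2\x02\x0e'
                b'\x1a]\n\x11RIOT_230120P35.00')

payload = strip_framing(raw)
# The payload now begins with 0x1a: tag for field 3 with the
# length-delimited wire type, a plausible start of a protobuf message.
print(hex(payload[0]))  # 0x1a
```

In the streaming job this would mean calling msg.ParseFromString(strip_framing(msg_bytes)) inside parse_protobuf_from_bytes, but only after the framing is confirmed.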
Suppose we use avr-gcc to compile code which has the following structure:
typedef struct {
uint8_t bLength;
uint8_t bDescriptorType;
int16_t wString[];
} S_string_descriptor;
We initialize it globally like this:
const S_string_descriptor sn_desc PROGMEM = {
1 + 1 + sizeof L"1234" - 2, 0x03, L"1234"
};
Let's check what is generated from it:
000000ac <__trampolines_end>:
ac: 0a 03 fmul r16, r18
ae: 31 00 .word 0x0031 ; ????
b0: 32 00 .word 0x0032 ; ????
b2: 33 00 .word 0x0033 ; ????
b4: 34 00 .word 0x0034 ; ????
...
So, indeed, the string contents follow the first two members of the structure, as required.
But if we check sizeof sn_desc, the result is 2.
The variable is defined at compile time, and sizeof is also a compile-time operator. So why doesn't sizeof var report the true size of var? And where is this compiler behavior (i.e., appending extra data to a structure) documented?
sizeof reports 2 because wString is a flexible array member: per the C standard (C11 6.7.2.1), a flexible array member is ignored when computing the size of the structure, so sizeof counts only the fixed part (bLength and bDescriptorType). Initializing a flexible array member, as done here, is a GNU C extension (see the GCC manual's "Arrays of Length Zero" section); the initializer's elements are emitted after the struct, but they still do not contribute to sizeof. There is no way to get the size of that data from the type alone; store it separately. This descriptor already does so: the first byte in the disassembly (0x0a = 10) is bLength, the total descriptor length in bytes.
OK, here it is. I have a program that performs this algorithm:
"IF X > 12 THEN X = 2*X+4 ELSE X = X + Y, OUTPUT X."
The problem is, I need it to perform this one instead:
"IF X > 12 THEN X = 2*X+4 ELSE X = X - 13, OUTPUT X."
How would I make this subtract rather than add?
ORG $1000
START: LEA PROMPT, A1
MOVE.B #14, D0 ; display string
TRAP #15
MOVE.B #4, D0 ; read from keyboard
TRAP #15
MOVE D1, D3 ; copy X
LEA STTY, A1
MOVE.B #14, D0 ; display string
TRAP #15
CMP #12, D3 ; X > 12 ?
BGT MULTADD ; branch if yes
CMP #12, D3 ; why compare again??
BRA ADDY
MULTADD
LEA XGT, A1
MOVE.B #14, D0 ; display string
TRAP #15
LEA TWOXP4, A1
MOVE.B #14, D0 ; display string
TRAP #15
MULU #2, D3 ; 2*X
ADD #4, D3 ; +4
MOVE D3, D1 ; copy to D1
MOVE.B #3, D0 ; Display decimal signed D1.L in smallest field
TRAP #15
BRA FIN
ADDY LEA XLT, A1
MOVE.B #14, D0 ; display string
TRAP #15
LEA XPY, A1
MOVE.B #14, D0 ; display string
TRAP #15
ADD Y, D3 ; X = X+Y
MOVE D3, D1
MOVE.B #3, D0 ; Display decimal signed D1.L in smallest field
TRAP #15
BRA FIN ; not needed
FIN MOVE.B #9,D0 ; terminate program
TRAP #15
* Variables and Strings
PROMPT DC.B 'Enter X: ', 0
STTY DC.B 'Y = 4', CR, LF, 0
XGT DC.B 'X > 12', CR, LF, 0
XLT DC.B 'X != 12', CR, LF, 0
TWOXP4 DC.B '2 * X + 4 = ', CR, LF, 0
XPY DC.B 'X + Y = ', 0
Y DC.W 4
CR EQU $0D
LF EQU $0A
END START
Tips:
Use MOVEQ to load a small number into a 32-bit register.
Don't use MULU to multiply by 2; a shift does the same job faster.
Use the .B, .W and .L size suffixes explicitly; the default is normally just 16 bits (.W).
ORG $1000
START: LEA PROMPT, A1
MOVEQ #14, D0 ; display string
TRAP #15
MOVEQ #4, D0 ; read number from keyboard
TRAP #15
MOVE.L D1,D3 ; save X
LEA STTY, A1
MOVEQ #14, D0 ; display string
TRAP #15
CMP.L #12, D3 ; X > 12 ?
BGT MULTADD ; branch if yes
ADDY LEA XLT, A1
MOVEQ #14, D0 ; display string
TRAP #15
LEA XPY, A1
MOVEQ #14, D0 ; display string
TRAP #15
ADD.L Y, D3 ; X = X+Y, change to SUB.L Y,D3
MOVE.L D3, D1
MOVEQ #3, D0 ; Display decimal signed D1.L in smallest field
TRAP #15
BRA FIN ; not needed
MULTADD
LEA XGT, A1
MOVEQ #14, D0 ; display string
TRAP #15
LEA TWOXP4, A1
MOVEQ #14, D0 ; display string
TRAP #15
ASL.L #1, D3 ; 2*X by shifting
ADDQ.L #4, D3 ; +4
MOVE.L D3, D1 ; copy to D1
MOVEQ #3, D0 ; Display decimal signed D1.L in smallest field
TRAP #15
FIN MOVEQ #9,D0 ; terminate program
TRAP #15
* Variables and Strings
CR EQU $0D
LF EQU $0A
PROMPT DC.B 'Enter X: ', 0
STTY DC.B 'Y = 4', CR, LF, 0
XGT DC.B 'X > 12', CR, LF, 0
XLT DC.B 'X != 12', CR, LF, 0
TWOXP4 DC.B '2 * X + 4 = ', CR, LF, 0
XPY DC.B 'X + Y = ', 0
Y DC.L 13
END START
I can't test it, but try replacing
ADD Y, D3
with
SUB Y, D3
In the following code, I am trying to sort numbers in ascending order.
start: nop
MVI B, 09 ; Initialize counter
LXI H, 2200H ;Initialize memory pointer
MVI C, 09H; Initialize counter 2
BACK: MOV A, M ;Get the number
INX H ;Increment memory pointer
CMP M; Compare number with next number
JC SKIP;If less, don't interchange
JZ SKIP; If equal, don't interchang
MOV D, M
MOV M, A
DCX H
MOV M, D
INX H ;Interchange two numbers
DCR C ; Decrement counter 2
JNZ BACK ;If not zero, repeat
DCR B ; Decrement counter 1
JNZ START
HLT ; Terminate program execution
This is what was taught in class.
When I try running the code in GNUSim, I get errors like:
1. Line 9: Undefined symbol.
2. Line 9: Invalid operand or symbol. Check whether operands start with a 0, e.g. a0H should be 0a0H.
Can somebody help?
In 8085 (js8085) I'd do it the following way (using bubble sort):
#begin 0100
#next 0100
MVI A 00
MVI B 00
MVI C 00
MVI D 00
MVI E 00
MVI H 00
MVI L 00
IN 00
out 00
DCR A
out 06
bubble: in 06
cmp c
jz finished
inr e
ldax b
mov h,a
ldax d
cmp h
jc change;
comprobation: in 00
cmp e
jz semi-fin
call bubble
semi-fin: inr c
mov a,c
mov e,c
call bubble
change: stax b
mov a,h
stax d
call comprobation
finished: hlt
Port 00 holds the number of elements, and the elements themselves are stored from position 0000 up to (number of elements - 1).
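Both listings implement the same underlying algorithm: repeatedly compare adjacent elements and swap them when they are out of order. As a reference for what the loops are doing, here is a minimal Python sketch of that bubble sort (the register names in the comments refer to the first listing):

```python
def bubble_sort(values):
    # Bubble sort: repeatedly compare adjacent elements and
    # swap them when the left one is larger (ascending order).
    n = len(values)
    for _ in range(n - 1):        # outer pass counter (like register B)
        for i in range(n - 1):    # inner compare counter (like register C)
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]
    return values

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```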
At the bottom of page 264 of CLRS, the authors say that after obtaining r0 = 17612864, the 14 most significant bits of r0 yield the hash value h(k) = 67. I do not understand why it gives 67, since 67 in binary is 1000011, which is only 7 bits.
EDIT
In the textbook:
As an example, suppose we have k = 123456, p = 14, m = 2^14 = 16384, and w = 32. Adapting Knuth's suggestion, we choose A to be the fraction of the form s/2^32 that is closest to (√5 - 1)/2, so that A = 2654435769/2^32. Then k·s = 327706022297664 = (76300 · 2^32) + 17612864, and so r1 = 76300 and r0 = 17612864. The 14 most significant bits of r0 yield the value h(k) = 67.
17612864 = 0x010CC040 =
0000 0001 0000 1100 1100 0000 0100 0000
The most significant 14 bits of that are
0000 0001 0000 11
which is 0x43, i.e. 67.
Also:
int32_t input = 17612864;
int32_t output = input >> (32 - 14);  // 67
In a 32 bit world
17612864 = 00000001 00001100 11000000 01000000 (binary)
top fourteen bits = 00000001 000011 = 67
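The whole derivation can be checked in a few lines of Python, using the textbook's numbers (k = 123456, s = 2654435769, w = 32, p = 14):

```python
k, s = 123456, 2654435769        # key and s = A * 2**32 from the example
w, p = 32, 14                    # word size and number of output bits

product = k * s                  # 327706022297664
r1, r0 = divmod(product, 2**w)   # high and low w-bit words of the product
h = r0 >> (w - p)                # 14 most significant bits of r0

print(r1, r0, h)  # 76300 17612864 67
```

The right shift by w - p = 18 is exactly "take the 14 most significant bits of the 32-bit word r0", which is why the result fits in at most 14 bits even though 67 happens to need only 7.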