Lua - Display field ASCII Dissector - filter

I am currently working on my first ever protocol dissector, and I am facing a problem that I can't solve. Basically, I have a field whose value is 8 bytes long but is defined over 9 bytes, so I created a bitfield to extract this protofield.
Here are the definitions of the field I have tested so far:
a) local harer_id = ProtoField.string("myProto.harer_id", "Harer ID", base.ASCII)
b) local harer_id = ProtoField.uint64("myProto.harer_id", "Harer ID", base.HEX)
Then I add it to the dissection tree in the following way:
local harer_id_long = tvbuf:range(16,9)
body:add(harer_id, harer_id_long:bitfield(4,64))
This gives the following results:
a) No error, but it doesn't display the value in ASCII format.
What I get: 0x0000000000313030
What I want: 0x0000000000313030 (100)
b) The error: calling 'add' on bad self (string expected, got userdata)
If any of you have any suggestions I would appreciate your help.
Thank you in advance,
Martin
EDIT 1:
I wrote this code, which extracts the ASCII character for each byte of the field's value, but I don't know how to make it display that value in the packet view:
function getASCII (str)
    local resultStr = ""
    local hexStr = tostring(str)            -- e.g. "0x0000000000313030"
    for i = 3, #hexStr - 1, 2 do            -- skip the "0x" prefix, step through byte pairs
        local byte = tonumber(hexStr:sub(i, i + 1), 16)
        if byte ~= nil and byte ~= 0 then   -- skip the zero padding bytes
            resultStr = resultStr .. string.char(byte)
        end
    end
    return resultStr
end
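One way to surface this in the packet view, mirroring the append_text approach used in the answers below, would be to keep the TreeItem returned by body:add(harer_id, ...) and call append_text(" (" .. getASCII(...) .. ")") on it.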

Here is an alternate method that also works for me. I'm not sure which you prefer, but you now have two to choose from (assuming you can get my original method to work):
local harer_id = ProtoField.uint64("myProto.harer_id", "Harer ID", base.HEX)
harer_item = body:add(harer_id, tvbuf(16, 9):bitfield(4, 64))
harer_item:set_len(9)
vals = {}
for i = 0, 7 do
    vals[i] = bit.bor(tvbuf(16 + i, 2):bitfield(4, 8), 0x30)
end
harer_item:append_text(" (" ..
    tonumber(string.format("%c%c%c%c%c%c%c%c", vals[0], vals[1], vals[2], vals[3], vals[4], vals[5], vals[6], vals[7])) ..
    ")")
EDIT: Here is a simple Lua dissector and sample packet you can use to test this solution:
-- Protocol
local p_foo = Proto("foo", "FOO Protocol")

-- Fields
local f_foo_res1 = ProtoField.uint8("foo.res1", "Reserved 1", base.DEC, nil, 0xf0)
local f_foo_str = ProtoField.uint64("foo.str", "String", base.HEX)
local f_foo_res2 = ProtoField.uint8("foo.res2", "Reserved 2", base.DEC, nil, 0x0f)
local f_foo_res3 = ProtoField.uint8("foo.res3", "Reserved 3", base.HEX)
local f_foo_ipv6 = ProtoField.ipv6("foo.ipv6", "IPv6 Address")
p_foo.fields = { f_foo_res1, f_foo_str, f_foo_res2, f_foo_res3, f_foo_ipv6 }

-- Dissection
function p_foo.dissector(buf, pinfo, tree)
    local foo_tree = tree:add(p_foo, buf(0,-1))
    pinfo.cols.protocol:set("FOO")
    foo_tree:add(f_foo_res1, buf(0, 1))
    str_item = foo_tree:add(f_foo_str, buf(0, 9):bitfield(4, 64))
    str_item:set_len(9)
    vals = {}
    for i = 0, 7 do
        vals[i] = bit.bor(buf(i, 2):bitfield(4, 8), 0x30)
    end
    str_item:append_text(" (" ..
        tonumber(string.format("%c%c%c%c%c%c%c%c", vals[0], vals[1], vals[2], vals[3], vals[4], vals[5], vals[6], vals[7])) ..
        ")")
    foo_tree:add(f_foo_res2, buf(9, 1))
    foo_tree:add(f_foo_res3, buf(10, 1))
    foo_tree:add(f_foo_ipv6, buf(11, 16))
end

-- Registration
local udp_table = DissectorTable.get("udp.port")
udp_table:add(33333, p_foo)
Use text2pcap to convert this data into a packet that Wireshark can read, or use Wireshark's "File -> Import From Hex Dump..." feature:
0000 00 0e b6 00 00 02 00 0e b6 00 00 01 08 00 45 00
0010 00 37 00 00 40 00 40 11 b5 ea c0 00 02 65 c0 00
0020 02 66 82 35 82 35 00 23 00 00 03 03 13 23 33 43
0030 53 63 70 80 64 20 01 0d b8 00 00 00 00 00 00 00
0040 00 00 00 00 01
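For example, with the dump saved to a plain text file (file names here are just placeholders):

text2pcap foo.txt foo.pcap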
My Wireshark details:
Compiled (64-bit) with Qt 5.6.2, with WinPcap (4_1_3), with GLib 2.42.0, with
zlib 1.2.8, with SMI 0.4.8, with c-ares 1.12.0, with Lua 5.2.4, with GnuTLS
3.4.11, with Gcrypt 1.7.6, with MIT Kerberos, with GeoIP, with nghttp2 1.14.0,
with LZ4, with Snappy, with libxml2 2.9.4, with QtMultimedia, with AirPcap, with
SBC, with SpanDSP.

There's probably a more efficient way to do this, but you could try something like this?
harer_id = ProtoField.string("myProto.harer_id", "Harer ID", base.ASCII)
harer_item = body:add(harer_id, tvbuf(16, 9))
-- rebuild each character from the low nibble of one byte and the high nibble of the next
local chars = ""
for i = 16, 23 do
    local hi = bit.band(bit.lshift(tvbuf(i, 1):uint(), 4), 0xf0)
    local lo = bit.rshift(tvbuf(i + 1, 1):uint(), 4)
    chars = chars .. string.char(bit.bor(hi, lo))
end
harer_item:set_text("Harer ID: " .. tonumber(chars))
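Note that the final tonumber() call is what strips the leading padding, e.g. the character string "00000100" becomes the number 100; if your padding bytes decode to NUL rather than the character "0", you may need to OR each byte with 0x30 first, as the other answer does.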

Related

MD5 implementation in Ruby

I am trying to implement MD5 in Ruby, following the pseudocode in the Wikipedia article.
Here is the code, which is not working correctly:
# Note: All variables are unsigned 32 bit and wrap modulo 2^32 when calculating

# s specifies the per-round shift amounts
s = []
s.concat([7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22])
s.concat([5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20])
s.concat([4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23])
s.concat([6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21])

# Use binary integer part of the sines of integers (Radians) as constants:
k = 0.upto(63).map do |i|
  (Math.sin(i + 1).abs * 2 ** 32).floor
end

# Initialize variables:
a0 = 0x67452301 # A
b0 = 0xefcdab89 # B
c0 = 0x98badcfe # C
d0 = 0x10325476 # D

message = File.read(ARGV[0])

# Pre-processing
# with bit stream "string" (MSB)
bits = message.unpack('B*')[0]
org_len = bits.size
bits << '1' # adding a single 1 bit
bits << '0' while !((bits.size + 64) % 512 == 0) # padding with zeros
bits << (org_len % 2 ** 64).to_s(2).rjust(64, '0')
message32 = [bits].pack('B*').unpack('V*') # ?

# 1. bits.scan(/(.{8})/).flatten.map { |b| b.to_i(2).to_s(16).rjust(2, '0') }.each_slice(16) { |c| puts c.join(' ')} => for test
# 2. [bits].pack('B*').unpack('N*') == bits.scan(/(.{32})/).flatten.map { |b| b.to_i(2).to_s(16).rjust(8, '0').to_i(16) } => true

# custom operations for wrapping the results as modulo 2 ** 32
class Integer
  def r
    self & 0xFFFFFFFF
  end

  def rotate_l(count)
    (self << count).r | (self >> (32 - count))
  end
end

# Process the message in successive 512-bit chunks:
message32.each_slice(16).each do |m|
  a = a0
  b = b0
  c = c0
  d = d0
  0.upto(63) do |i|
    if i < 16
      f = d ^ (b & (c ^ d))
      g = i
    elsif i < 32
      f = c ^ (d & (b ^ c))
      g = (5 * i + 1) % 16
    elsif i < 48
      f = b ^ c ^ d
      g = (3 * i + 5) % 16
    elsif i < 64
      f = c ^ (b | ~d)
      g = (7 * i) % 16
    end
    f = (f + a + k[i] + m[g]).r
    a = d
    d = c
    c = b
    b = (b + f.rotate_l(s[i])).r
  end
  a0 = (a0 + a).r
  b0 = (b0 + b).r
  c0 = (c0 + c).r
  d0 = (d0 + d).r
end

puts [a0, b0, c0, d0].pack('V*').unpack('H*')
I'm testing with two messages that are well known for colliding in a single block:
Message 1
Message 2
They result in the same value, but not the correct one:
❯ ruby md5.rb message1.bin
816922b82e2f8d5bd3abf90777ad72c9
❯ ruby md5.rb message2.bin
816922b82e2f8d5bd3abf90777ad72c9
❯ md5 message*
MD5 (/Users/hansuk/Downloads/message1.bin) = 008ee33a9d58b51cfeb425b0959121c9
MD5 (/Users/hansuk/Downloads/message2.bin) = 008ee33a9d58b51cfeb425b0959121c9
I'm uncertain about the pre-processing steps.
I checked the bit stream after pre-processing with the two commented checks (# 1. and # 2. in the code above); the original message is written out the same and the padding bits look right:
❯ hexdump message1.bin
0000000 4d c9 68 ff 0e e3 5c 20 95 72 d4 77 7b 72 15 87
0000010 d3 6f a7 b2 1b dc 56 b7 4a 3d c0 78 3e 7b 95 18
0000020 af bf a2 00 a8 28 4b f3 6e 8e 4b 55 b3 5f 42 75
0000030 93 d8 49 67 6d a0 d1 55 5d 83 60 fb 5f 07 fe a2
0000040
(byebug) bits.scan(/(.{8})/).flatten.map { |b| b.to_i(2).to_s(16).rjust(2, '0') }.each_slice(16) { |c| puts c.join(' ')}
4d c9 68 ff 0e e3 5c 20 95 72 d4 77 7b 72 15 87
d3 6f a7 b2 1b dc 56 b7 4a 3d c0 78 3e 7b 95 18
af bf a2 00 a8 28 4b f3 6e 8e 4b 55 b3 5f 42 75
93 d8 49 67 6d a0 d1 55 5d 83 60 fb 5f 07 fe a2
80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 02 00
nil
What have I missed?
The most classical mistake in implementing MD5 is botching endianness: the padded message and the message length are to be turned into 32-bit words per the little-endian convention, so that the message 'abc' in ASCII (0x61 0x62 0x63) is turned into a 16-word padded message block with m[0]=0x80636261, m[14]=0x18, and m[1…13,15]=0.
I never wrote anything in Ruby, but I get the feeling the code yields m[0]=0x61626380, m[15]=0x18, and m[1…14]=0.
Also: the 4-word result is to be turned to 16 bytes per little-endian convention too.
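In the Ruby code above the message words are in fact already little-endian, since unpack('V*') reads 32-bit little-endian words; the remaining endianness bug appears to be the appended length: bits << (org_len % 2 ** 64).to_s(2).rjust(64, '0') appends it as a big-endian bit string, which is why the dump above ends in ... 02 00 rather than placing 0x200 in m[14]. To make the convention concrete, here is a small Python sketch (illustrative only, separate from the Ruby code) that builds the padded block for "abc" the way MD5 expects:

import struct

msg = b"abc"
# append 0x80, zero-fill to 56 bytes mod 64, then the bit length as a little-endian u64
padded = msg + b"\x80" + b"\x00" * ((55 - len(msg)) % 64) + struct.pack("<Q", 8 * len(msg))
m = struct.unpack("<16I", padded)  # sixteen little-endian 32-bit words
print(hex(m[0]), hex(m[14]))       # 0x80636261 0x18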

Reading sk_buff with ebpf inside dev_queue_xmit yields questionable data

I'm trying to capture outgoing ethernet frames on the local host before they are sent by inserting a kprobe into __dev_queue_xmit().
However, the bytes I extract from the sk_buff structure do not match the subsequently captured packets.
So far I have only attempted this for linear skbs, because I already get unexpected results there.
For example, my kprobe reported the following information during a call to __dev_queue_xmit():
COMM PID TGID LEN DATALEN
chronyd 1058 1058 90 0
3431c4b06a8b3c7c3f2023bd08006500d0a57f040f7f0000000000000000000000000000000000006018d11a0f7f00000100000000000000000000000000000060a67f040f7f0000000000000000000000000000000000004001
COMM is the name of the process which called the function, PID is the calling thread's id, and TGID is its thread group id. LEN is the value of (skb->len - skb->data_len) and DATALEN is skb->data_len.
Next, the program copied LEN (in this case 90) bytes starting at skb->data.
Since DATALEN is zero, this is a linear skb. Thus, those bytes should contain exactly the frame which is about to be sent, shouldn't they?
Well, Wireshark subsequently recorded this frame:
0000 34 31 c4 b0 6a 8b 3c 7c 3f 20 23 bd 08 00 45 00
0010 00 4c 83 93 40 00 40 11 d1 a2 c0 a8 b2 18 c0 a8
0020 b2 01 c8 07 00 7b 00 38 e5 b4 23 00 06 20 00 00
0030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0050 00 00 38 bc 17 13 12 4a 4c c0
The first 14 bytes, which form the Ethernet header, match up perfectly, as expected. Everything else doesn't match up at all.
The question now is: Why do the bytes not match up?
(Yes, I am certain the frame from Wireshark is indeed the one caused by this call to __dev_queue_xmit(). This is because only background programs using the network were running at the time, so the amount of outgoing traffic was rather small. Additionally, the captured frame contains, as expected, 90 bytes. Also, this frame holds an NTP payload, which is just what you'd expect from chronyd.)
My kernel version is 5.12.6-200.fc33.x86_64.
If you want to try it out yourself or have a closer look at my program, here it is:
from bcc import BPF
from ctypes import cast, POINTER, c_char

prog = """
#include <linux/sched.h>
#include <linux/skbuff.h>

struct xmit_event {
    u64 ts;
    u32 pid;
    u32 tgid;
    u32 len;
    u32 datalen;
    u32 packet_buf_ptr;
    char comm[TASK_COMM_LEN];
    u64 head;
    u64 data;
    u64 tail;
    u64 end;
};

BPF_PERF_OUTPUT(xmits);

#define PACKET_BUF_SIZE 32768
#define PACKET_BUFS_PER_CPU 15

struct packet_buf {
    char data[PACKET_BUF_SIZE];
};

BPF_PERCPU_ARRAY(packet_buf, struct packet_buf, PACKET_BUFS_PER_CPU);
BPF_PERCPU_ARRAY(packet_buf_head, u32, 1);

int kprobe____dev_queue_xmit(struct pt_regs *ctx, struct sk_buff *skb, void *accel_priv) {
    if (skb == NULL || skb->data == NULL)
        return 0;
    struct xmit_event data = { };

    u64 both = bpf_get_current_pid_tgid();
    data.pid = both;
    if (data.pid == 0)
        return 0;
    data.tgid = both >> 32;
    data.ts = bpf_ktime_get_ns();
    bpf_get_current_comm(&data.comm, sizeof(data.comm));

    data.len = skb->len;

    // Copy packet contents
    int slot = 0;
    u32 *packet_buf_ptr = packet_buf_head.lookup(&slot);
    if (packet_buf_ptr == NULL)
        return 0;
    u32 buf_head = *packet_buf_ptr;
    u32 next_buf_head = (buf_head + 1) % PACKET_BUFS_PER_CPU;
    packet_buf_head.update(&slot, &next_buf_head);

    struct packet_buf *ringbuf = packet_buf.lookup(&buf_head);
    if (ringbuf == NULL)
        return 0;
    u32 skb_data_len = skb->data_len;
    u32 headlen = data.len - skb_data_len;
    headlen &= 0xffffff; // Useless, but validator demands it because "this unsigned(!) variable could otherwise be negative"
    bpf_probe_read_kernel(ringbuf->data, headlen < PACKET_BUF_SIZE ? headlen : PACKET_BUF_SIZE, skb->data);
    data.packet_buf_ptr = buf_head;

    data.len = headlen;
    data.datalen = skb_data_len;
    data.head = (u64) skb->head;
    data.data = (u64) skb->data;
    data.tail = (u64) skb->tail;
    data.end = (u64) skb->end;

    xmits.perf_submit(ctx, &data, sizeof(data));
    return 0;
}
"""

global b

def xmit_received(cpu, data, size):
    global b
    global py_packet_buf
    ev = b["xmits"].event(data)
    print("%-18d %-25s %-8d %-8d %-10d %-10d %-12d %-12d %-12d %-12d" % (ev.ts, ev.comm.decode(), ev.pid, ev.tgid, ev.len, ev.datalen, ev.head, ev.data, ev.tail, ev.end))
    bs = cast(py_packet_buf[ev.packet_buf_ptr][cpu].data, POINTER(c_char))[:ev.len]
    c = bytes(bs)
    print(c.hex())

def observe_kernel():
    # load BPF program
    global b
    b = BPF(text=prog)
    print("%-18s %-25s %-8s %-8s %-10s %-10s %-12s %-12s %-12s %-12s" % ("TS", "COMM", "PID", "TGID", "LEN", "DATALEN", "HEAD", "DATA", "TAIL", "END"))
    b["xmits"].open_perf_buffer(xmit_received)
    global py_packet_buf
    py_packet_buf = b["packet_buf"]
    try:
        while True:
            b.perf_buffer_poll()
    except KeyboardInterrupt:
        print("Kernel observer thread stopped.")

observe_kernel()
Found the issue.
I needed to replace
struct packet_buf {
    char data[PACKET_BUF_SIZE];
};

with

struct packet_buf {
    unsigned char data[PACKET_BUF_SIZE];
};
I, however, do not understand how signedness makes a difference when I am not performing comparisons or arithmetic operations with this data.

How to generate a deterministic set of UUIDs in golang

I'm doing some testing and it would be useful to have a known set of UUIDs that are getting used by our code. However, I'm having trouble figuring out how to create a deterministic set of UUIDs in golang.
I've tried a couple of approaches, but neither seemed to work:
type KnownReader struct {
    store *Store
}

type Store struct {
    val uint16
}

func (r KnownReader) Read(p []byte) (n int, err error) {
    ret := r.store.val
    r.store.val = ret + 1
    fmt.Printf("\nStore: %v", r.store.val)
    p = make([]byte, 4)
    binary.LittleEndian.PutUint16(p, uint16(ret))
    fmt.Printf("\nreader p: % x", p)
    return binary.MaxVarintLen16, nil
}

func main() {
    r := KnownReader{
        store: &Store{val: 111},
    }
    uuid.SetRand(r)
    u, _ := uuid.NewRandomFromReader(r)
    fmt.Printf("\n%v", u)
    u, _ = uuid.NewRandomFromReader(r)
    fmt.Printf("\n%v", u)
}
---- OUTPUT ----
Store: 1
reader p: 00 00 00 00
Store: 2
reader p: 01 00 00 00
Store: 3
reader p: 02 00 00 00
Store: 4
reader p: 03 00 00 00
Store: 5
reader p: 04 00 00 00
Store: 6
reader p: 05 00 00 00
00000000-0000-4000-8000-000000000000
Store: 7
reader p: 06 00 00 00
Store: 8
reader p: 07 00 00 00
Store: 9
reader p: 08 00 00 00
Store: 10
reader p: 09 00 00 00
Store: 11
reader p: 0a 00 00 00
Store: 12
reader p: 0b 00 00 00
00000000-0000-4000-8000-000000000000
As you can see, the UUID does not change between calls.
I also tried using uuid.FromBytes, but that didn't seem to work either:
func getbytes(num uint16) []byte {
    p := make([]byte, 4)
    binary.LittleEndian.PutUint16(p, num)
    fmt.Printf("\ngetbytes p: % x", p)
    return p
}

func main() {
    var i uint16 = 0
    fmt.Printf("\nout getbytes: % x", getbytes(i))
    u, _ := uuid.FromBytes(getbytes(i))
    i = i + 1
    fmt.Printf("\nUUID: %v", u)
    fmt.Printf("\nout getbytes: % x", getbytes(i))
    u, _ = uuid.FromBytes(getbytes(i))
    fmt.Printf("\nUUID: %v", u)
}
---- OUTPUT ----
getbytes p: 00 00 00 00
out getbytes: 00 00 00 00
getbytes p: 00 00 00 00
UUID: 00000000-0000-0000-0000-000000000000
getbytes p: 01 00 00 00
out getbytes: 01 00 00 00
getbytes p: 01 00 00 00
UUID: 00000000-0000-0000-0000-000000000000
As you can see, the UUIDs are still the same here as well.
So, is there something I'm missing? How can I get a consistent set of UUIDs?
Thanks
Thanks Adrian, I think I figured out the answer:
rnd := rand.New(rand.NewSource(1))
uuid.SetRand(rnd)
u, _ := uuid.NewRandomFromReader(rnd)
fmt.Printf("\n%v", u)
u, _ = uuid.NewRandomFromReader(rnd)
fmt.Printf("\n%v", u)
--- OUTPUT ---
52fdfc07-2182-454f-963f-5f0f9a621d72
9566c74d-1003-4c4d-bbbb-0407d1e2c649
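For the record, the KnownReader approach fails because of the io.Reader contract: Read must fill the slice it is handed, but p = make([]byte, 4) rebinds the local variable instead of writing into the caller's buffer, and returning binary.MaxVarintLen16 misreports how many bytes were produced. The uuid package therefore only ever saw zero bytes, which is why every result was 00000000-0000-4000-8000-000000000000, with only the version and variant bits set.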

how to supply a specific timezone to TzSpecificLocalTimeToSystemTime()

The following dBase code invokes a win32 API function to convert a local DST time to a system time. The first parameter set to "null" means that the function takes the current active time zone. What value do I have to put instead of "null" to specify another time zone?
The following page refers to lpTimeZoneInformation as a pointer to a TIME_ZONE_INFORMATION structure that specifies the time zone for the local time input to this function (lpLocalTime), but it is unclear to me what kind of pointer this is.
I have tried 'Brisbane', 'E. Australia Standard Time', '10:00' and '+10:00' but none returns the expected value.
https://learn.microsoft.com/en-us/windows/win32/api/timezoneapi/nf-timezoneapi-tzspecificlocaltimetosystemtime
ITOH and HTOI are integer-to-hex and hex-to-integer conversion functions.
The localtime and systemtime structures work; I tried to replicate that for the TIME_ZONE_INFORMATION part, but without success so far.
As it stands, the return value is 13.20
Thanks for any help!
d = new date("31/12/2020 5:08")
offset1 = getLocalTimeOffset(d)/60

function getLocalTimeOffset(d_in)
    // todo typechecking of the parameter
    extern clogical TzSpecificLocalTimeToSystemTime(cptr,cptr,cptr) kernel32
    extern culong GetLastError(cvoid) kernel32
    local systemtime, localtime, tmp
    localtime = replicate(chr(0),16)
    systemtime = replicate(chr(0),16)
    TZI = replicate(chr(0),16)
    TZIa = itoh(-600,4)
    TZIb = itoh(-60,4)
    TZI.setbyte(1, htoi(left(TZIa,2)))
    TZI.setbyte(0, htoi(right(TZIa,2)))
    TZI.setbyte(9, htoi(left(TZIb,2)))
    TZI.setbyte(8, htoi(right(TZIb,2)))
    tmp = itoh(d_in.year,4)
    localtime.setbyte(1, htoi(left(tmp,2)))  // fill the systemtime structure
    localtime.setbyte(0, htoi(right(tmp,2))) // seconds and ms are of no concern
    localtime.setbyte(2, d_in.month+1)
    localtime.setbyte(4, d_in.day)
    localtime.setbyte(6, d_in.date)
    localtime.setbyte(8, d_in.hour)
    localtime.setbyte(10, d_in.minute)
    if TzSpecificLocalTimeToSystemTime(TZI, localtime, systemtime) = 0
        tmp = getlasterror() ; ? "Error: "+tmp ; return 9999
    endif
    tmp = sign(d_in.date - systemtime.getbyte(6))*24*60 // consider day boundary
    if (d_in.date = 1 or systemtime.getbyte(6) = 1) and (d_in.month+1 <> systemtime.getbyte(2))
        tmp = -tmp // adjust for month boundaries
    endif
    tmp += (d_in.hour - systemtime.getbyte(8))*60
    tmp += d_in.minute - systemtime.getbyte(10)
    return tmp
(Too long for a comment.)   The first parameter to TzSpecificLocalTimeToSystemTime must be either NULL, or otherwise point to a TIME_ZONE_INFORMATION structure, filled-in with the target timezone data from HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones per Remarks on the same page.
In OP's case, Brisbane falls under the E. Australia Standard Time key, and TZI data parses as:
typedef struct _REG_TZI_FORMAT
{
    LONG Bias;               // -600  A8 FD FF FF
    LONG StandardBias;       //    0  00 00 00 00
    LONG DaylightBias;       //  -60  C4 FF FF FF
    SYSTEMTIME StandardDate; //  n/a  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    SYSTEMTIME DaylightDate; //  n/a  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
} REG_TZI_FORMAT;
Following is the C code to fill-in a TIME_ZONE_INFORMATION structure with the same data, and successfully convert a Brisbane local time to UTC:
#include <windows.h>
#include <stdio.h>

int main()
{
    TIME_ZONE_INFORMATION tzEAST = // [offset] bytes
    {
        -600,  // LONG Bias;               [0]   A8 FD FF FF
        { 0 }, // WCHAR StandardName[32];  [4]   00 .. 00
        { 0 }, // SYSTEMTIME StandardDate; [68]  00 .. 00
        0,     // LONG StandardBias;       [84]  00 00 00 00
        { 0 }, // WCHAR DaylightName[32];  [88]  00 .. 00
        { 0 }, // SYSTEMTIME DaylightDate; [152] 00 .. 00
        -60    // LONG DaylightBias;       [168] C4 FF FF FF
    };
    SYSTEMTIME stEAST = { 2021, 1, 1, 4, 12 }, stUTC = { 0 };
    if(!TzSpecificLocalTimeToSystemTime(&tzEAST, &stEAST, &stUTC)) return 1;
    printf("EAST %d-%02d-%02d %02d:%02d:%02d = UTC %d-%02d-%02d %02d:%02d:%02d\n",
        stEAST.wYear, stEAST.wMonth, stEAST.wDay, stEAST.wHour, stEAST.wMinute, stEAST.wSecond,
        stUTC.wYear, stUTC.wMonth, stUTC.wDay, stUTC.wHour, stUTC.wMinute, stUTC.wSecond);
    return 0;
}
Output:
EAST 2021-01-04 12:00:00 = UTC 2021-01-04 02:00:00
[ EDIT ]   Following is my guess of what the dBase code might look like. Just a guess, and nothing more than a guess, since I don't actually know dBase beyond what's been posted here.
tzBias = itoh(-600, 8)
tzDstBias = itoh( -60, 8)
tzi = replicate(chr(0), 86) // 86*2 = 172 = sizeof TIME_ZONE_INFORMATION
tzi.setbyte( 0, htoi(substring( tzBias, 6, 8))) // [ 0] LONG Bias;
tzi.setbyte( 1, htoi(substring( tzBias, 4, 6)))
tzi.setbyte( 2, htoi(substring( tzBias, 2, 4)))
tzi.setbyte( 3, htoi(substring( tzBias, 0, 2)))
tzi.setbyte(168, htoi(substring(tzDstBias, 6, 8))) // [168] LONG DaylightBias;
tzi.setbyte(169, htoi(substring(tzDstBias, 4, 6)))
tzi.setbyte(170, htoi(substring(tzDstBias, 2, 4)))
tzi.setbyte(171, htoi(substring(tzDstBias, 0, 2)))
if TzSpecificLocalTimeToSystemTime(tzi, localtime, systemtime) = 0 // ...
[ EDIT #2 courtesy OP ]   The working dBase code to fill the structure is the following:
tzi.setbyte( 0, htoi(substr(tzBias, 7, 2))) // [ 0] LONG Bias
tzi.setbyte( 1, htoi(substr(tzBias, 5, 2)))
tzi.setbyte( 2, htoi(substr(tzBias, 3, 2)))
tzi.setbyte( 3, htoi(substr(tzBias, 1, 2)))
tzi.setbyte(168, htoi(substr(tzDstBias, 7,2))) // [168] LONG DaylightBias
tzi.setbyte(169, htoi(substr(tzDstBias, 5,2)))
tzi.setbyte(170, htoi(substr(tzDstBias, 3,2)))
tzi.setbyte(171, htoi(substr(tzDstBias, 1,2)))
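If you want to sanity-check those byte offsets, here is a small Python ctypes sketch (illustrative only; it mirrors the Windows layout, with WCHAR modeled as a 16-bit value):

import ctypes

class SYSTEMTIME(ctypes.Structure):
    _fields_ = [(n, ctypes.c_ushort) for n in (
        "wYear", "wMonth", "wDayOfWeek", "wDay",
        "wHour", "wMinute", "wSecond", "wMilliseconds")]

class TIME_ZONE_INFORMATION(ctypes.Structure):
    _fields_ = [("Bias", ctypes.c_int32),
                ("StandardName", ctypes.c_uint16 * 32),  # WCHAR[32]
                ("StandardDate", SYSTEMTIME),
                ("StandardBias", ctypes.c_int32),
                ("DaylightName", ctypes.c_uint16 * 32),  # WCHAR[32]
                ("DaylightDate", SYSTEMTIME),
                ("DaylightBias", ctypes.c_int32)]

print(TIME_ZONE_INFORMATION.Bias.offset)          # 0
print(TIME_ZONE_INFORMATION.DaylightBias.offset)  # 168
print(ctypes.sizeof(TIME_ZONE_INFORMATION))       # 172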

Re-attempting BASIC 6502 N-byte integer addition?

I initially asked for help and wrote a BASIC program in the 6502 PET emulator which added two n-byte integers. However, my feedback was that it was simply adding two 16-bit integers, not n-byte integers.
Can anyone help me understand this feedback by looking at my code, and point me in the right direction to write a program that adds two n-byte integers?
Thank you for the collaboration!
Documentation:
Adds two n-byte integers using absolute indexed addressing. The addends begin at memory locations $0700 and $0800, and the answer is stored at $0900. The byte length of the integers is at $0600 ($00 means 256).
Machine Code:
18 a2 00 ac 00 06 bd 00
07 7d 00 08 9d 00 09 e8
00 88 00 d0
Op Codes, Documentation, Variables:
A1 = $0600
B1 = $0700
B2 = $0800
Z1 = $0900
[START] = $0500
CLC 18
LDX #$00 A2 00 // loads x with 0
LDY A1 AC 00 06 // loads length on Y
loop: LDA B1, x BD 00 07 // load first operand
ADC B2, x 7D 00 08 // adds second operand
STA Z1, x 9D 00 09 // store result
INX E8 00 // go to next byte
DEY 88 00 // count how many are left
BNE loop D0 // do more if needed
It looked to me like your code does what you claim -- adds two N-byte operands in little-endian byte order. I vaguely remembered the various addressing modes of the 6502 from my misspent youth, and the code seems fine. X is used to index the current byte of the two numbers, Y is a counter for the length of the operands in bytes, and you loop over those bytes, stored at addresses 0x0700 and 0x0800, writing the result at address 0x0900.
Rather than get the Commodore 64 out of the attic and try it out, I used an online virtual 6502 simulator. On this site we can set the memory address and load the byte values in; they even link to a page to assemble opcodes too. So, setting memory location 0x0600 to "04" and both 0x0700 and 0x0800 to "04 03 02 01", we should see this code add two 32-bit values (0x01020304 + 0x01020304 == 0x02040608).
Stepping through the code by clicking on the PC register, setting it to 0x0500 and then single-stepping, we see there is a bug in your machine code: after INX, which compiles to E8, we hit a spurious 0x00 value (BRK), which terminates the program. The corrected code below runs to completion, and the expected value is seen by reading the memory at 0x0900.
0000 CLC 18
0001 LDX #$00 A2 00
0003 LDY $0600 AC 00 06
0006 LOOP: LDA $0700,X BD 00 07
0009 ADC $0800,X 7D 00 08
000C STA $0900,X 9D 00 09
000F INX E8
0010 DEY 88
0011 BNE LOOP: D0 F3
Memory dump:
:0900 08 06 04 02 00 00 00 00
:0908 00 00 00 00 00 00 00 00
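For reference, the byte-wise carry loop described above can be sketched in a few lines of Python (an illustration of the algorithm, not 6502 code):

def add_nbytes(a, b):
    # little-endian byte-wise addition with carry, like the CLC/ADC/INX/DEY loop
    carry = 0
    out = []
    for x, y in zip(a, b):
        s = x + y + carry   # ADC adds the incoming carry
        out.append(s & 0xFF)
        carry = s >> 8      # the carry for the next byte
    return bytes(out)

print(add_nbytes(bytes.fromhex("04030201"), bytes.fromhex("04030201")).hex())
# -> 08060402, matching the memory dump at $0900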
