How to read a program memory range with pk3cmd - microchip

I need to read a **program memory range** in a Microchip microcontroller using the command-line tool pk3cmd.exe, but I either get errors or pk3cmd reads the whole program memory.
I have tried the following arguments:
PK3CMD -P32MX440F512H -GPFC:\DemoCode.Hex -N1d000000,1d0000FF -V3.3
Result: Incorrect number format for radix 10
PK3CMD -P32MX440F512H -GPFC:\DemoCode.Hex -N0x1d000000,0x1d0000ff -V3.3
Result: Value must have a value 0x200*n - 1. Example: 0x1ff
PK3CMD -P32MX440F512H -GPFC:\DemoCode.Hex -N486539264,256 -V3.3
Result: Value not in range [0x1d000000, 0x1d07ffff]
PK3CMD -P32MX440F512H -GPFC:\DemoCode.Hex -N486539264,486539519 -V3.3
Result: Value must have a value 0x200*n - 1. Example: 0x1ff
PK3CMD -P32MX440F512H -GPFC:\DemoCode.Hex -N0x1d000000,0x100 -V3.3
Result: Value not in range [0x1d000000, 0x1d07ffff]

Finally I got it working. After many attempts, I decided to decompile the jar files of MPLAB IPE, and I found the solution by reading the Java code.
PK3CMD -P32MX440F512H -GP1d000000-1d0000ff -V3.3
The response is sent to the screen, but you can redirect the standard output to a file.
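For example (the output file name here is just an illustration), the same read can be captured with a standard output redirection:
PK3CMD -P32MX440F512H -GP1d000000-1d0000ff -V3.3 > C:\ReadDump.txt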

Reading a folder of log files, and calculating the event durations for unique IDs

I have an air-gapped system (so limited in software access) that generates usage logs daily. The logs have unique IDs for devices, which I've managed to scrape in the past and pump out to a CSV that I would then clean up in LibreCalc (related to this question I asked here - https://superuser.com/questions/1732415/find-next-matching-event-in-log-and-compare-timings) to get event durations for each one.
This is getting arduous as more devices are added, so I wish to automate the calculation of the total duration and the number of events for each device. I've had some suggestions of using out/awk/sed, and I'm a bit lost on how to implement it.
Log Example
message="device02 connected" event_ts=2023-01-10T09:20:21Z
message="device05 connected" event_ts=2023-01-10T09:21:31Z
message="device02 disconnected" event_ts=2023-01-10T09:21:56Z
message="device04 connected" event_ts=2023-01-10T11:12:28Z
message="device05 disconnected" event_ts=2023-01-10T15:26:36Z
message="device04 disconnected" event_ts=2023-01-10T18:23:32Z
I already have a bash script that scrapes these events from the log files in the folder and then outputs it all to a csv.
#!/bin/bash
#Just a datetime stamp for the flatfile
now=$(date +"%Y%m%d")
#Log file path, also where I define what month to scrape
LOGFILE='local.log-202301*'
#Shows what log files are getting read
echo $LOGFILE \n
#Output line by line to csv
awk '(/connect/ && ORS="\n") || (/disconnect/ && ORS=RS) {field1_var=$1" "$2" "$3","; print field1_var}' $LOGFILE > /home/user/logs/LOG_$now.csv
Ideally I'd like to keep that process so I can manually inspect the file if necessary. But ultimately I'd prefer to automate the event calculations to produce something like below:
Desired Output Example
Device      Total Connection Duration    Total Connections
device01    0h 0m 0s                     0
device02    0h 1m 35s                    1
device03    0h 0m 0s                     0
device04    7h 11m 4s                    1
device05    6h 5m 5s                     1
Hopefully that's enough info; any help or pointers would be greatly appreciated. Thanks.
This isn't based on your script at all, since I didn't get it to produce a CSV, but anyway...
Here's an AWK script that computes the desired result for the given example log file:
function time_lapsed(from, to) {
    gsub(/[^0-9 ]/, " ", from);
    gsub(/[^0-9 ]/, " ", to);
    return mktime(to) - mktime(from);
}
BEGIN { OFS = "\t"; }
(/ connected/) {
    split($1, a, "=\"", _);
    split($3, b, "=", _);
    device_connected_at[a[2]] = b[2];
    device_connection_count[a[2]]++;
}
(/disconnected/) {
    split($1, a, "=\"", _);
    split($3, b, "=", _);
    device_connection_duration[a[2]] += time_lapsed(device_connected_at[a[2]], b[2]);
}
END {
    print "Device", "Total Connection Duration", "Total Connections";
    for (device in device_connection_duration) {
        print device, strftime("%Hh %Mm %Ss", device_connection_duration[device]), device_connection_count[device];
    }
}
I used it on this example log file
message="device02 connected" event_ts=2023-01-10T09:20:21Z
message="device05 connected" event_ts=2023-01-10T09:21:31Z
message="device02 disconnected" event_ts=2023-01-10T09:21:56Z
message="device04 connected" event_ts=2023-01-10T11:12:28Z
message="device06 connected" event_ts=2023-01-10T11:12:28Z
message="device05 disconnected" event_ts=2023-01-10T15:26:36Z
message="device02 connected" event_ts=2023-01-10T19:20:21Z
message="device04 disconnected" event_ts=2023-01-10T18:23:32Z
message="device02 disconnected" event_ts=2023-01-10T21:41:33Z
And it produces this output
Device Total Connection Duration Total Connections
device02 03h 22m 47s 2
device04 08h 11m 04s 1
device05 07h 05m 05s 1
You can pass this program to awk without any flags. It should just work (given you didn't mess around with field and record separators somewhere in your shell session).
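For example, saving it to a file and running it with GNU awk (the file names here are placeholders; note that mktime and strftime are GNU awk extensions, so plain POSIX awk won't have them):
gawk -f durations.awk local.log-202301* > durations.tsv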
Let me explain what's going on:
First we define the time_lapsed function. In that function we first convert the ISO 8601 timestamps into the format that mktime can handle (YYYY MM DD HH MM SS); we simply drop the offset since it's all UTC. We then compute the difference of the Epoch timestamps that mktime returns and return that result.
Next in the BEGIN block we define the output field separator OFS to be a tab.
Then we define two rules, one for log lines when the device connected and one for when the device disconnected.
Due to the default field separator the input to these rules looks like this:
$1: message="device02
$2: connected"
$3: event_ts=2023-01-10T09:20:21Z
We don't care about $2. We use split to get the device identifier and the timestamp from $1 and $3 respectively.
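For example, on the first log line split($1, a, "=\"") leaves a[1] = message and a[2] = device02, while split($3, b, "=") leaves b[2] = 2023-01-10T09:20:21Z.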
In the rule for a device connecting, using the device identifier as the key, we store when the device connected and increase the connection count for that device. We don't need to initially assign 0 because an awk associative array returns "" for a key that has no entry yet, and that empty string is coerced to 0 when we increment it.
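For instance, this one-liner prints 1 even though count["x"] was never initialised:
awk 'BEGIN { count["x"]++; print count["x"] }'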
In the rule for a device disconnecting we compute the time lapsed and add that to the total time elapsed for that device.
Note that this requires every connect to have a matching disconnect in the logs. In other words, this is fairly fragile: a missing connect log line will mess up the calculation of the total connection time, and a missing disconnect log line will increase the connection count but not the total connection time.
In the END rule we print the desired output header, and for every entry in the associative array device_connection_duration we print the device identifier, total connection duration and total connection count.
I hope this gives you some ideas on how to solve your task.

strconv.ParseInt fails if number starts with 0

I'm currently having issues parsing some numbers starting with 0 in Go.
fmt.Println(strconv.ParseInt("0491031", 0, 64))
0 strconv.ParseInt: parsing "0491031": invalid syntax
GoPlayground: https://go.dev/play/p/TAv7IEoyI8I
I think this is due to some base conversion error, but I don't have ideas about how to fix it.
I'm getting this error while parsing a 5 GB+ CSV file with gocsv, if you need more details.
[This error was caused by the GoCSV library, which doesn't let you specify a base for the numbers being parsed.]
Quoting from the strconv.ParseInt() documentation:
If the base argument is 0, the true base is implied by the string's prefix following the sign (if present): 2 for "0b", 8 for "0" or "0o", 16 for "0x", and 10 otherwise. Also, for argument base 0 only, underscore characters are permitted as defined by the Go syntax for integer literals.
You are passing 0 for base, so the base to parse in will be inferred from the string value, and since it starts with a '0' (and not "0b", "0o", or "0x"), your number is interpreted as an octal (base 8) number, and the digit 9 is invalid there.
Note that this would work:
fmt.Println(strconv.ParseInt("0431031", 0, 64))
And output (try it on the Go Playground):
143897 <nil>
(Octal 431031 equals 143897 decimal.)
If your input is in base 10, pass 10 for base:
fmt.Println(strconv.ParseInt("0491031", 10, 64))
Then output will be (try it on the Go Playground):
491031 <nil>
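If the input has to keep flowing through GoCSV, a possible workaround is a custom field type that always parses in base 10. This is only a sketch; it relies on gocsv's UnmarshalCSV hook (github.com/gocarina/gocsv), and the type name is made up:
import "strconv"

// Base10Int always parses in base 10, so a leading zero is not
// treated as an octal prefix.
type Base10Int int64

// UnmarshalCSV is the method gocsv looks for on custom field types.
func (i *Base10Int) UnmarshalCSV(s string) error {
    v, err := strconv.ParseInt(s, 10, 64)
    if err != nil {
        return err
    }
    *i = Base10Int(v)
    return nil
}
Fields declared as Base10Int in the struct you pass to gocsv would then accept values like "0491031".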

MPI_Allreduce with Fortran and 2-byte integers

I'm trying to do an MPI sum of 2-byte integers:
INTEGER, PARAMETER :: SIK2 = SELECTED_INT_KIND(2)
INTEGER(SIK2) :: s_save(dim)
It is actually an array that only takes integer values from 1 to 48, so 2 bytes are enough, which matters for memory reasons.
Therefore I tried the following:
CALL MPI_TYPE_CREATE_F90_INTEGER(SIK2, int2type, ierr)
CALL MPI_ALLreduce(MPI_IN_PLACE, s_save, nkpt_in, int2type, MPI_SUM, world_comm, ierr)
This works well with gfortran + Open MPI.
However, in the case of Intel I get a crash:
MPI_Allreduce(1000)......: MPI_Allreduce(sbuf=MPI_IN_PLACE, rbuf=0x55d2160, count=987, dtype=USER<f90_integer>, MPI_SUM, MPI_COMM_WORLD) failed
MPIR_SUM_check_dtype(106): MPI_Op MPI_SUM operation not defined for this datatype
Is there a proper (or recommended) way to do this so that it works for most compilers?

How to get range of dates that are representable/accepted by date(1)/mktime(3)?

On my modern 64-bit Mac, the date 1901-12-14 is the earliest accepted by the following command:
date -ju -f "%F" "1901-12-14" "+%s"
I checked the source for the macOS date command here (Apple Open Source), and it is a failed mktime call that produces the "date: nonexistent time" error for earlier dates.
I've looked over the source for mktime here (Apple Open Source) and I think it's an integer representation issue, but I'm not sure.
How can I find or compute the accepted range of dates of the date command (really of mktime)?
And if I wanted to get the Unix time for a date that can't be represented by mktime internals, what other libraries or functions can handle earlier dates?
The current macOS (OS X) implementation of mktime(3) has a minimum supported Unix time of INT32_MIN (-2147483648). This is because localtime.c:2102 assumes the result will fit in a 32-bit integer if the value is less than INT32_MAX:
/* optimization: see if the value is 31-bit (signed) */
t = (((time_t) 1) << (TYPE_BIT(int) - 1)) - 1;
bits = ((*funcp)(&t, offset, &mytm) == NULL || tmcomp(&mytm, &yourtm) < 0) ? TYPE_BIT(time_t) - 1 : TYPE_BIT(int) - 1;
Between 1901-12-14 and 1901-12-13 the Unix time dips below INT32_MIN (which corresponds to 1901-12-13T20:45:52Z), so it requires more than 32 bits while still being less than INT32_MAX, causing the function to run out of bits.
This is not much of an issue considering that values before the year 1900 are explicitly disallowed by localtime.c:2073:
/* Don't go below 1900 for POLA */
if (yourtm.tm_year < 0)
return WRONG;
(POLA: Principle Of Least Astonishment)
Additionally, time_t is not required to be signed.
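To see the boundary directly, here is a minimal C sketch that probes mktime around that date, assuming the macOS behaviour described above (on platforms whose mktime has a 64-bit range, such as glibc, both calls normally succeed):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    setenv("TZ", "UTC", 1);     /* mirror date -ju by forcing UTC */
    tzset();

    struct tm tm = {0};
    tm.tm_year = 1901 - 1900;   /* tm_year counts from 1900 */
    tm.tm_mon  = 11;            /* December (months are 0-based) */
    tm.tm_mday = 14;

    time_t t = mktime(&tm);
    printf("1901-12-14: %s\n", t == (time_t)-1 ? "rejected" : "accepted");

    tm.tm_mday = 13;            /* one day earlier, below INT32_MIN */
    t = mktime(&tm);
    printf("1901-12-13: %s\n", t == (time_t)-1 ? "rejected" : "accepted");
    return 0;
}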

Meaning of mxlc in Oracle Trace file

I am seeing the following in my trace file:
Bind#3 oacdty=01 mxl=128(35) mxlc=36 mal=00 scl=00 pre=00
oacflg=03 fl2=1000010 frm=01 csi=31 siz=0 off=168
kxsbbbfp=ffffffff79f139a8 bln=128 avl=35 flg=01 value="1234 W
1234 West, West Groves City"
I am wondering what the mxlc value is.
I quote:
Bind #n
oacdty - Datatype code
mxl - Maximum length of the bind variable value (private maximum length in parentheses)
mxlc - Unknown :(
mal - array length
scl - Scale
pre - Precision
oacflg - Special flag indicating bind options
fl2 - second part of oacflg
frm - Unknown :(
csi - Unknown :(
siz - Amount of memory to be allocated for this chunk
off - Offset into this chunk for this bind buffer
kxsbbbfp- Bind address
bln - Bind buffer length
avl - actual value length
flg - bind status flag
value - Value of the bind variable
Source (& snippet of the book)
The book also states:
There is currently no information on three parameters.
Which are mxlc, frm, and csi.
Summary
mxlc appears to be the maximum number of characters for the bind variable, but only if the variable uses character length semantics.
Method
I searched My Oracle Support for mxlc. Almost every article has mxlc=00; the only exceptions involve an NVARCHAR or NCHAR. The code below is based on the code from Document ID 552262.1. I changed the variable sizes (99 and 123 characters) around, and each time mxlc was set to the variable size if character length semantics was used.
Code
create table t1(ncol1 nvarchar2(100), col1 varchar2(100));
alter session set timed_statistics = true;
alter session set statistics_level=all;
alter session set max_dump_file_size = unlimited;
alter session set events '10046 trace name context forever,level 4';
VAR nvar1 NVARCHAR2(99)
VAR var1 VARCHAR2(123 char)
EXEC :nvar1 := 'nvarchar'
EXEC :var1 := 'varchar'
SELECT * FROM T1 WHERE ncol1 = :nvar1 and col1 = :var1;
ALTER SESSION SET EVENTS '10046 trace name context off';
Results:
Bind#0
oacdty=01 mxl=2000(198) mxlc=99 mal=00 scl=00 pre=00
oacflg=03 fl2=1000010 frm=02 csi=2000 siz=4000 off=0
kxsbbbfp=0e702edc bln=2000 avl=16 flg=05
value=0 6e 0 76 0 61 0 72 0 63 0 68 0 61 0 72
Bind#1
oacdty=01 mxl=2000(369) mxlc=123 mal=00 scl=00 pre=00
oacflg=03 fl2=1000010 frm=01 csi=873 siz=0 off=2000
kxsbbbfp=0e7036ac bln=2000 avl=07 flg=01
value="varchar"
More Questions
Normally the relationship between mxl and mxlc makes sense. For an NVARCHAR, which is UTF-16 on my system, there will be 2 bytes per character, thus 198 and 99. My database is UTF-8, where a character could take up to 4 bytes. Maybe Oracle guesses the average size will be 3 bytes, thus 123 and 369. Obviously it could be more than 369; perhaps that's just the initial memory allocated, and it can grow later?
But your numbers, 36 and 35, don't make sense to me. Surely the number of bytes can never be LESS than the number of characters? Is Oracle making a bad guess, or is some client program sending in bad data?
