How do I pretty-print structures (specifically Vecs) in rust-gdb or plain gdb? Whenever I call p some_vector I get this result:
collections::vec::Vec<usize> = {buf = alloc::raw_vec::RawVec<usize> = {ptr = core::ptr::Unique<usize> = {pointer = core::nonzero::NonZero<*const usize> = {
0x7ffff640d000}, _marker = core::marker::PhantomData<usize>}, cap = 16}, len = 10}
This is just unreadable. Is there any way to get a result showing the contents of the Vec? I am using Rust 1.12 and GDB 7.12.
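One way (gdb pretty-printers are written in Python) is to register a custom printer for the layout shown in that output. Here is a minimal sketch, assuming the Rust 1.12 field path from the dump above; the __0 name for the NonZero inner field is a guess and may differ between Rust and gdb versions:

import gdb

class VecPrinter:
    def __init__(self, val):
        self.val = val

    def to_string(self):
        # Walk buf.ptr.pointer down to the raw data pointer, then read len elements.
        length = int(self.val["len"])
        ptr = self.val["buf"]["ptr"]["pointer"]["__0"]  # NonZero inner field: assumed name
        elems = ", ".join(str((ptr + i).dereference()) for i in range(length))
        return "Vec(len=%d) [%s]" % (length, elems)

def vec_lookup(val):
    # Match any instantiation of the Vec type shown in the question.
    if str(val.type).startswith("collections::vec::Vec"):
        return VecPrinter(val)
    return None

gdb.pretty_printers.append(vec_lookup)

Load it inside gdb with: source vec_printer.py (vec_printer.py being whatever file you saved the sketch in). It is also worth checking that rust-gdb finds the printers Rust ships in $(rustc --print sysroot)/lib/rustlib/etc; when those load correctly, Vecs should print readably out of the box.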
I am trying to perform a test run of an XDP BPF program. The BPF program uses the bpf_xdp_adjust_meta() helper to adjust the metadata.
I tried:
to run bpf_prog_test_run()
to run bpf_prog_test_run_xattr()
1. bpf_prog_test_run()
(On my first attempt, my BPF program's debug messages told me that adjusting the data_meta field failed.) Now it can adjust the data_meta, but the iph.ihl field is apparently not set to 5.
2. bpf_prog_test_run_xattr()
This always returns -1, so something failed.
The Code
packet:
struct ipv4_packet pkt_v4 = {
    .eth.h_proto = __bpf_constant_htons(ETH_P_IP),
    .iph.ihl = 5,
    .iph.daddr = __bpf_constant_htonl(33554442),
    .iph.saddr = __bpf_constant_htonl(50331658),
    .iph.protocol = IPPROTO_TCP,
    .iph.tot_len = __bpf_constant_htons(MAGIC_BYTES),
    .tcp.urg_ptr = 123,
    .tcp.doff = 5,
};
test attribute:
__u32 size, retval, duration;
char data_out[128];
struct xdp_md ctx_in, ctx_out;
struct bpf_prog_test_run_attr test_attr = {
    .prog_fd = prog_fd,
    .repeat = 100,
    .data_in = &pkt_v4,
    .data_size_in = sizeof(&pkt_v4),
    .data_out = &data_out,
    .data_size_out = sizeof(data_out),
    .ctx_in = &ctx_in,
    .ctx_size_in = sizeof(ctx_in),
    .ctx_out = &ctx_out,
    .ctx_size_out = sizeof(ctx_out),
    .retval = &retval,
    .duration = &duration,
};
test execution:
bpf_prog_test_run(main_prog_fd, 1, &pkt_v4, sizeof(pkt_v4), &data_out, &size, &retval, &duration) -> iph.ihl field is 0.
bpf_prog_test_run_xattr(&test_attr) -> returns -1.
Note
The program was already successfully attached to the hook point of a real network interface and ran as intended. I just replaced the code that attaches the program to the hook point with the above code for testing.
The struct ipv4_packet pkt_v4 was not packed.
When I replace __packed with __attribute__((__packed__)), it works.
For information on what happens without packing, see for example this question.
Basically, the compiler adds padding bytes, which leads to the fields in the packet being in different places than expected.
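The effect is easy to demonstrate. Here is a small Python/ctypes sketch (an illustration of the layout rule, not the BPF code itself); ctypes applies the same alignment padding a C compiler would, and _pack_ = 1 corresponds to __attribute__((__packed__)):

import ctypes

# u8 followed by u32: the compiler aligns the u32 to 4 bytes,
# inserting 3 padding bytes after the u8.
class Padded(ctypes.Structure):
    _fields_ = [("flag", ctypes.c_uint8), ("value", ctypes.c_uint32)]

# _pack_ = 1 is the ctypes equivalent of __attribute__((__packed__)).
class Packed(ctypes.Structure):
    _pack_ = 1
    _fields_ = [("flag", ctypes.c_uint8), ("value", ctypes.c_uint32)]

print(ctypes.sizeof(Padded))  # 8 on a typical ABI: 1 + 3 (padding) + 4
print(ctypes.sizeof(Packed))  # 5: fields sit back to back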
I have an inspection application in pharma. I need to inspect batch numbers and Pharmacodes on labels using OpenCV. I thought of using PyZbar but it doesn't support Pharmacode. How can I add more codes, like Pharmacode, to PyZbar?
Pharmacode (also known as code32) is implemented as code39 using a radix-32 "compression scheme". So you can use PyZbar, with the understanding that you will get a code39 decode from the library and then have to convert the value from base-32 to base-10.
Note that there is no way of knowing whether a code39 "read" is actually code32 or just plain code39.
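For the scanning side, a minimal pyzbar sketch (label.png is a placeholder filename; the symbols argument restricts decoding to CODE39):

from PIL import Image
from pyzbar.pyzbar import ZBarSymbol, decode

# Restrict decoding to CODE39, since code32 arrives as a code39 read.
for result in decode(Image.open("label.png"), symbols=[ZBarSymbol.CODE39]):
    print(result.type, result.data.decode("ascii"))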
Here is some JavaScript that converts a code39 value back to its original code32 string:
// code32 radix set (missing the vowels AEIO)
const code32set = '0123456789BCDFGHJKLMNPQRSTUVWXYZ';

function code39toCode32(val) {
    if (/[^0-9BCDFGHJKLMNPQRSTUVWXYZ]/.test(val)) {
        throw new Error("Not code32");
    }
    let res = 0;
    for (let i = 0; i < val.length; i++) {
        res = res * 32 + code32set.indexOf(val[i]);
    }
    let code32 = '' + res;
    if (code32.length < 9) {
        code32 = ('000000000' + code32).slice(-9);
    }
    return code32;
}
That function should translate to python pretty easily.
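For instance, a direct Python translation of the same radix-32 logic (zero-padding to the nine digits a code32 value always has):

# code32 radix set (missing the vowels AEIO)
CODE32SET = "0123456789BCDFGHJKLMNPQRSTUVWXYZ"

def code39_to_code32(val):
    if any(ch not in CODE32SET for ch in val):
        raise ValueError("Not code32")
    res = 0
    for ch in val:
        res = res * 32 + CODE32SET.index(ch)  # accumulate base-32 digits
    return str(res).zfill(9)  # pad to nine decimal digits

# e.g. code39_to_code32("Z0") == "000000992"  (31 * 32 + 0)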
I have this:
TDictionaryStrInt = specialize TFPGMap<string, integer>;
Can somebody tell me how the heck I can debug the Map, i.e. the key/value pairs?
I see only a reference to a memory address, but I really need to see the items.
Watches and local variables do not help me.
I can see only this:
<TDictionaryStrStr> = {
<TFPSMAP> = {
<TFPSLIST> = {
<TOBJECT> = {
_vptr$ = {
0x5612ec,
0x230b988}},
FLIST = ,
FCOUNT = 1,
FCAPACITY = 4,
FITEMSIZE = 8},
FKEYSIZE = 4,
FDATASIZE = 4,
FDUPLICATES = DUPIGNORE,
FSORTED = false,
FONKEYPTRCOMPARE = $426b70 <TFPGMAP$2$CRC36DB32B4__KEYCOMPARE>,
FONDATAPTRCOMPARE = $523e30
<FGL$_$TFPSMAP_$__$$_BINARYCOMPAREDATA$POINTER$POINTER$$LONGINT>},
FONKEYCOMPARE = $0,
FONDATACOMPARE = $0}
redim = 2;
# Loading data
iris_data = readdlm("iris_data.csv");
iris_target = readdlm("iris_target.csv");
# Center data
iris_data = broadcast(-, iris_data, mean(iris_data, 1));
n_data, n_dim = size(iris_data);
Sw = zeros(n_dim, n_dim);
Sb = zeros(n_dim, n_dim);
C = cov(iris_data);
classes = unique(iris_target);
for i=1:length(classes)
    index = find(x -> x==classes[i], iris_target);
    d = iris_data[index,:];
    classcov = cov(d);
    Sw += length(index) / n_data .* classcov;
end
Sb = C - Sw;
evals, evecs = eig(Sw, Sb);
w = evecs[:,1:redim];
new_data = iris_data * w;
This code just does LDA (linear discriminant analysis) on the iris_data, reducing its dimensions to 2.
It takes about 4 seconds, but Python (NumPy/SciPy) takes only about 0.6 seconds.
Why?
This is from the first page, second paragraph of the introduction in the Julia Manual:
Because Julia’s compiler is different from the interpreters used for languages like Python or R, you may find that Julia’s performance is unintuitive at first. If you find that something is slow, we highly recommend reading through the Performance Tips section before trying anything else. Once you understand how Julia works, it’s easy to write code that’s nearly as fast as C.
Excerpt:
Avoid global variables
A global variable might have its value, and therefore its type, change at any point. This makes it difficult for the compiler to optimize code using global variables. Variables should be local, or passed as arguments to functions, whenever possible.
Any code that is performance critical or being benchmarked should be inside a function.
We find that global names are frequently constants, and declaring them as such greatly improves performance.
Knowing that the script style (all procedural top-level code) is so pervasive among many scientific computing users, I would recommend that you at least wrap the whole file inside a let expression for starters (let introduces a new local scope), i.e.:
let
    redim = 2
    # Loading data
    iris_data = readdlm("iris_data.csv")
    iris_target = readdlm("iris_target.csv")
    # Center data
    iris_data = broadcast(-, iris_data, mean(iris_data, 1))
    n_data, n_dim = size(iris_data)
    Sw = zeros(n_dim, n_dim)
    Sb = zeros(n_dim, n_dim)
    C = cov(iris_data)
    classes = unique(iris_target)
    for i=1:length(classes)
        index = find(x -> x==classes[i], iris_target)
        d = iris_data[index,:]
        classcov = cov(d)
        Sw += length(index) / n_data .* classcov
    end
    Sb = C - Sw
    evals, evecs = eig(Sw, Sb)
    w = evecs[:,1:redim]
    new_data = iris_data * w
end
But I would also urge you to refactor that into small functions and then compose a main function that calls the rest, something like this. Notice how this refactor makes your code general and reusable (and fast):
module LinearDiscriminantAnalysis

export load_data, center_data

"Returns data and target Matrices."
load_data(data_path, target_path) = (readdlm(data_path), readdlm(target_path))

function center_data(data, target)
    data = broadcast(-, data, mean(data, 1))
    n_data, n_dim = size(data)
    Sw = zeros(n_dim, n_dim)
    Sb = zeros(n_dim, n_dim)
    C = cov(data)
    classes = unique(target)
    for i=1:length(classes)
        index = find(x -> x==classes[i], target)
        d = data[index,:]
        classcov = cov(d)
        Sw += length(index) / n_data .* classcov
    end
    Sb = C - Sw
    evals, evecs = eig(Sw, Sb)
    redim = 2
    w = evecs[:,1:redim]
    return data * w
end

end

using LinearDiscriminantAnalysis

function main()
    iris_data, iris_target = load_data("iris_data.csv", "iris_target.csv")
    result = center_data(iris_data, iris_target)
    @show result
end

main()
Notes:
You don't need all those semicolons.
Anonymous functions are currently slow, but that will change in v0.5. You can use FastAnonymous for now, if performance is critical.
In summary, read carefully and take into account all the performance tips.
main is just a name; it could be anything else you like.
I am having trouble with the C routines in libb64. Here is my code:
base64_encodestate state;
int outBufLen = 2 * nInBuf;
*outBuf = new char[outBufLen];
base64_init_encodestate(&state);
int r1 = base64_encode_block(inBuf, nInBuf, *outBuf, &state);
int r2 = base64_encode_blockend(*outBuf, &state);
base64_init_encodestate(&state);
This puts the = at the beginning, not at the end.
So I tried this:
base64_encodestate state;
int outBufLen = 2 * nInBuf;
*outBuf = new char[outBufLen];
base64_init_encodestate(&state);
int r1 = base64_encode_block(inBuf, nInBuf, *outBuf, &state);
int r2 = base64_encode_blockend(*outBuf+ r1, &state);
base64_init_encodestate(&state);
This works, but not for "large" (~800 KB text) files; then it skips the end = entirely. In that case base64_encode_blockend(code_out, state) enters case step_C, where state->result = 0. I tried writing the b64 data to a file using the size reported by the libb64 functions, but it misses the end or is partial. I'm not sure.
I'm pretty much fed up with this. I based my code on the encode and decode structs.
Also, does anyone know whether there is a Windows API for base64 encoding/decoding? I am not using any C++ standard library stuff; that's why I don't use the structs.
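One note on the missing =: base64 only emits padding when the input length is not a multiple of 3, so a stream that happens to end on a 3-byte boundary legitimately has no = at all. That is a property of base64 itself, not of libb64, as a quick Python check illustrates:

import base64

print(base64.b64encode(b"ab"))    # b'YWI='      2 bytes in -> one '=' pad
print(base64.b64encode(b"abc"))   # b'YWJj'      3 bytes in -> no padding
print(base64.b64encode(b"abcd"))  # b'YWJjZA=='  4 bytes in -> two '=' pads

So if the size written matches r1 + r2 from the two calls, the output may simply be complete without a trailing =.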