How to generate a deterministic set of UUIDs in golang

I'm doing some testing and it would be useful to have a known set of UUIDs that are getting used by our code. However, I'm having trouble figuring out how to create a deterministic set of UUIDs in golang.
I've tried a couple of approaches, but neither seemed to work:
type KnownReader struct {
    store *Store
}

type Store struct {
    val uint16
}

func (r KnownReader) Read(p []byte) (n int, err error) {
    ret := r.store.val
    r.store.val = ret + 1
    fmt.Printf("\nStore: %v", r.store.val)
    p = make([]byte, 4)
    binary.LittleEndian.PutUint16(p, uint16(ret))
    fmt.Printf("\nreader p: % x", p)
    return binary.MaxVarintLen16, nil
}

func main() {
    r := KnownReader{
        store: &Store{val: 111},
    }
    uuid.SetRand(r)
    u, _ := uuid.NewRandomFromReader(r)
    fmt.Printf("\n%v", u)
    u, _ = uuid.NewRandomFromReader(r)
    fmt.Printf("\n%v", u)
}
---- OUTPUT ----
Store: 1
reader p: 00 00 00 00
Store: 2
reader p: 01 00 00 00
Store: 3
reader p: 02 00 00 00
Store: 4
reader p: 03 00 00 00
Store: 5
reader p: 04 00 00 00
Store: 6
reader p: 05 00 00 00
00000000-0000-4000-8000-000000000000
Store: 7
reader p: 06 00 00 00
Store: 8
reader p: 07 00 00 00
Store: 9
reader p: 08 00 00 00
Store: 10
reader p: 09 00 00 00
Store: 11
reader p: 0a 00 00 00
Store: 12
reader p: 0b 00 00 00
00000000-0000-4000-8000-000000000000
As you can see, the UUID does not change between calls.
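A likely culprit in this first attempt: Go passes the slice header by value, so p = make([]byte, 4) only rebinds Read's local variable and never writes into the 16-byte buffer that the uuid package passes in, which is why both UUIDs come out as all zeros apart from the version/variant bits. Returning binary.MaxVarintLen16 (3) instead of the number of bytes written also forces io.ReadFull to call Read six times per UUID, matching the six "Store:" lines per UUID above. A minimal sketch of a Read that fills the caller's buffer, keeping the question's types:

func (r KnownReader) Read(p []byte) (n int, err error) {
    ret := r.store.val
    r.store.val = ret + 1
    // Write into the buffer the caller handed us instead of reallocating it.
    for i := range p {
        p[i] = 0
    }
    if len(p) >= 2 {
        binary.LittleEndian.PutUint16(p, ret)
    }
    return len(p), nil // report how many bytes were actually filled
}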
I also tried using uuid.FromBytes, but that didn't seem to work either:
func getbytes(num uint16) []byte {
    p := make([]byte, 4)
    binary.LittleEndian.PutUint16(p, num)
    fmt.Printf("\ngetbytes p: % x", p)
    return p
}

func main() {
    var i uint16 = 0
    fmt.Printf("\nout getbytes: % x", getbytes(i))
    u, _ := uuid.FromBytes(getbytes(i))
    i = i + 1
    fmt.Printf("\nUUID: %v", u)
    fmt.Printf("\nout getbytes: % x", getbytes(i))
    u, _ = uuid.FromBytes(getbytes(i))
    fmt.Printf("\nUUID: %v", u)
}
---- OUTPUT ----
getbytes p: 00 00 00 00
out getbytes: 00 00 00 00
getbytes p: 00 00 00 00
UUID: 00000000-0000-0000-0000-000000000000
getbytes p: 01 00 00 00
out getbytes: 01 00 00 00
getbytes p: 01 00 00 00
UUID: 00000000-0000-0000-0000-000000000000
As you can see, the UUIDs are still identical (and all zero) here as well.
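The likely cause here, assuming the github.com/google/uuid package: uuid.FromBytes requires a slice of exactly 16 bytes, and otherwise returns the zero UUID together with an error, which the code above discards. Padding the counter out to 16 bytes makes it round-trip, roughly:

p := make([]byte, 16) // FromBytes wants exactly 16 bytes
binary.LittleEndian.PutUint16(p, i)
u, err := uuid.FromBytes(p) // err was non-nil above because only 4 bytes were passed

Note that FromBytes interprets the bytes verbatim, so the result will not carry valid version/variant bits.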
So, is there something I'm missing? How can I get a consistent set of UUIDs?
Thanks

Thanks Adrian, I think I figured out the answer:
rnd := rand.New(rand.NewSource(1))
uuid.SetRand(rnd)
u, _ := uuid.NewRandomFromReader(rnd)
fmt.Printf("\n%v", u)
u, _ = uuid.NewRandomFromReader(rnd)
fmt.Printf("\n%v", u)
---- OUTPUT ----
52fdfc07-2182-454f-963f-5f0f9a621d72
9566c74d-1003-4c4d-bbbb-0407d1e2c649
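As a side note (this is my reading of github.com/google/uuid, worth double-checking): once SetRand has been called, plain uuid.New() draws from the same seeded source, so the explicit reader argument in the calls above is optional:

rnd := rand.New(rand.NewSource(1))
uuid.SetRand(rnd)
fmt.Println(uuid.New()) // same deterministic sequence for a fixed seed
fmt.Println(uuid.New())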

Related

MD5 implementation in Ruby

I am trying to implement MD5 in Ruby, following the pseudocode on Wikipedia.
Here is the code, which is not working correctly:
# Note: All variables are unsigned 32 bit and wrap modulo 2^32 when calculating

# s specifies the per-round shift amounts
s = []
s.concat([7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22])
s.concat([5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20])
s.concat([4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23])
s.concat([6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21])

# Use binary integer part of the sines of integers (radians) as constants:
k = 0.upto(63).map do |i|
  (Math.sin(i + 1).abs * 2 ** 32).floor
end

# Initialize variables:
a0 = 0x67452301 # A
b0 = 0xefcdab89 # B
c0 = 0x98badcfe # C
d0 = 0x10325476 # D

message = File.read(ARGV[0])

# Pre-processing, with bit stream "string" (MSB)
bits = message.unpack('B*')[0]
org_len = bits.size
bits << '1' # adding a single 1 bit
bits << '0' while !((bits.size + 64) % 512 == 0) # padding with zeros
bits << (org_len % 2 ** 64).to_s(2).rjust(64, '0')
message32 = [bits].pack('B*').unpack('V*') # ?
# 1. bits.scan(/(.{8})/).flatten.map { |b| b.to_i(2).to_s(16).rjust(2, '0') }.each_slice(16) { |c| puts c.join(' ')} => for test
# 2. [bits].pack('B*').unpack('N*') == bits.scan(/(.{32})/).flatten.map { |b| b.to_i(2).to_s(16).rjust(8, '0').to_i(16) } => true

# Custom operations for wrapping the results modulo 2 ** 32
class Integer
  def r
    self & 0xFFFFFFFF
  end

  def rotate_l(count)
    (self << count).r | (self >> (32 - count))
  end
end

# Process the message in successive 512-bit chunks:
message32.each_slice(16).each do |m|
  a = a0
  b = b0
  c = c0
  d = d0
  0.upto(63) do |i|
    if i < 16
      f = d ^ (b & (c ^ d))
      g = i
    elsif i < 32
      f = c ^ (d & (b ^ c))
      g = (5 * i + 1) % 16
    elsif i < 48
      f = b ^ c ^ d
      g = (3 * i + 5) % 16
    elsif i < 64
      f = c ^ (b | ~d)
      g = (7 * i) % 16
    end
    f = (f + a + k[i] + m[g]).r
    a = d
    d = c
    c = b
    b = (b + f.rotate_l(s[i])).r
  end
  a0 = (a0 + a).r
  b0 = (b0 + b).r
  c0 = (c0 + c).r
  d0 = (d0 + d).r
end

puts [a0, b0, c0, d0].pack('V*').unpack('H*')
I'm testing with two messages that are well known to collide within a single block:
Message 1
Message 2
They produce the same value, but not the correct one:
❯ ruby md5.rb message1.bin
816922b82e2f8d5bd3abf90777ad72c9
❯ ruby md5.rb message2.bin
816922b82e2f8d5bd3abf90777ad72c9
❯ md5 message*
MD5 (/Users/hansuk/Downloads/message1.bin) = 008ee33a9d58b51cfeb425b0959121c9
MD5 (/Users/hansuk/Downloads/message2.bin) = 008ee33a9d58b51cfeb425b0959121c9
I am unsure about the pre-processing steps.
I checked the bit stream after pre-processing using the two numbered debugging comments in the code above (1. and 2.): the original message is written out unchanged and the padding bits look right:
❯ hexdump message1.bin
0000000 4d c9 68 ff 0e e3 5c 20 95 72 d4 77 7b 72 15 87
0000010 d3 6f a7 b2 1b dc 56 b7 4a 3d c0 78 3e 7b 95 18
0000020 af bf a2 00 a8 28 4b f3 6e 8e 4b 55 b3 5f 42 75
0000030 93 d8 49 67 6d a0 d1 55 5d 83 60 fb 5f 07 fe a2
0000040
(byebug) bits.scan(/(.{8})/).flatten.map { |b| b.to_i(2).to_s(16).rjust(2, '0') }.each_slice(16) { |c| puts c.join(' ')}
4d c9 68 ff 0e e3 5c 20 95 72 d4 77 7b 72 15 87
d3 6f a7 b2 1b dc 56 b7 4a 3d c0 78 3e 7b 95 18
af bf a2 00 a8 28 4b f3 6e 8e 4b 55 b3 5f 42 75
93 d8 49 67 6d a0 d1 55 5d 83 60 fb 5f 07 fe a2
80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 02 00
nil
What am I missing?
The most classical mistake in implementing MD5 is botching endianness: the padded message and the message length are to be turned into 32-bit words per the little-endian convention, so that the message 'abc' in ASCII (0x61 0x62 0x63) is turned into a 16-word padded message block with m[0]=0x80636261, m[14]=0x18, and m[1…13,15]=0.
I never wrote anything in Ruby, but I get the feeling the code yields m[0]=0x61626380, m[15]=0x18, and m[1…14]=0.
Also: the 4-word result is to be turned into 16 bytes per the little-endian convention, too.
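Judging from the dump in the question (it ends in 00 00 02 00), the unpack('V*') already handles the per-word endianness, and it is specifically the 64-bit bit-length that went in big-endian: for a 512-bit message, MD5 wants those last 8 bytes to read 00 02 00 00 00 00 00 00. A small sketch of the expected little-endian packing for the answer's 'abc' example, written in Go since the target values are language-neutral:

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    // One padded 512-bit block for the 24-bit message "abc".
    block := make([]byte, 64)
    copy(block, "abc")
    block[3] = 0x80                               // the single appended 1 bit
    binary.LittleEndian.PutUint64(block[56:], 24) // bit length, little-endian

    // Split into sixteen 32-bit little-endian words, as MD5 requires.
    var m [16]uint32
    for i := range m {
        m[i] = binary.LittleEndian.Uint32(block[4*i:])
    }
    fmt.Printf("m[0]=%#x m[14]=%#x m[15]=%#x\n", m[0], m[14], m[15])
    // Prints: m[0]=0x80636261 m[14]=0x18 m[15]=0x0
}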

Reading sk_buff with ebpf inside dev_queue_xmit yields questionable data

I'm trying to capture outgoing ethernet frames on the local host before they are sent by inserting a kprobe into __dev_queue_xmit().
However, the bytes I extract from the sk_buff structure do not match the subsequently captured packets.
So far I have only attempted this for linear skbs, because I already get unexpected results there.
For example, my kprobe reported the following information during a call to __dev_queue_xmit():
COMM PID TGID LEN DATALEN
chronyd 1058 1058 90 0
3431c4b06a8b3c7c3f2023bd08006500d0a57f040f7f0000000000000000000000000000000000006018d11a0f7f00000100000000000000000000000000000060a67f040f7f0000000000000000000000000000000000004001
COMM is the name of the process which called the function,
PID is the calling thread's id and TGID its thread group id. LEN is the value of (skb->len - skb->data_len) and DATALEN is skb->data_len.
Next, the program has copied LEN (in this case 90) bytes starting at skb->data.
Since DATALEN is zero, this is a linear skb. Thus, those bytes should contain exactly the frame which is about to be sent, shouldn't they?
Well, Wireshark subsequently recorded this frame:
0000 34 31 c4 b0 6a 8b 3c 7c 3f 20 23 bd 08 00 45 00
0010 00 4c 83 93 40 00 40 11 d1 a2 c0 a8 b2 18 c0 a8
0020 b2 01 c8 07 00 7b 00 38 e5 b4 23 00 06 20 00 00
0030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0050 00 00 38 bc 17 13 12 4a 4c c0
The first 14 bytes, which form the ethernet header, match up perfectly, as expected. Everything else doesn't match up at all.
The question now is: Why do the bytes not match up?
(Yes, I am certain the frame from Wireshark is indeed the one caused by this call to __dev_queue_xmit(). This is because only background programs using the network were running at the time, so the amount of outgoing traffic was rather small. Additionally, the captured frame contains, as expected, 90 bytes. Also, this frame holds an NTP payload, which is just what you'd expect from chronyd.)
My kernel version is 5.12.6-200.fc33.x86_64.
If you want to try it out yourself or have a closer look at my program, here it is:
from bcc import BPF
from ctypes import cast, POINTER, c_char

prog = """
#include <linux/sched.h>
#include <linux/skbuff.h>

struct xmit_event {
    u64 ts;
    u32 pid;
    u32 tgid;
    u32 len;
    u32 datalen;
    u32 packet_buf_ptr;
    char comm[TASK_COMM_LEN];
    u64 head;
    u64 data;
    u64 tail;
    u64 end;
};

BPF_PERF_OUTPUT(xmits);

#define PACKET_BUF_SIZE 32768
#define PACKET_BUFS_PER_CPU 15

struct packet_buf {
    char data[PACKET_BUF_SIZE];
};

BPF_PERCPU_ARRAY(packet_buf, struct packet_buf, PACKET_BUFS_PER_CPU);
BPF_PERCPU_ARRAY(packet_buf_head, u32, 1);

int kprobe____dev_queue_xmit(struct pt_regs *ctx, struct sk_buff *skb, void *accel_priv) {
    if (skb == NULL || skb->data == NULL)
        return 0;

    struct xmit_event data = { };
    u64 both = bpf_get_current_pid_tgid();
    data.pid = both;
    if (data.pid == 0)
        return 0;
    data.tgid = both >> 32;
    data.ts = bpf_ktime_get_ns();
    bpf_get_current_comm(&data.comm, sizeof(data.comm));
    data.len = skb->len;

    // Copy packet contents
    int slot = 0;
    u32 *packet_buf_ptr = packet_buf_head.lookup(&slot);
    if (packet_buf_ptr == NULL)
        return 0;
    u32 buf_head = *packet_buf_ptr;
    u32 next_buf_head = (buf_head + 1) % PACKET_BUFS_PER_CPU;
    packet_buf_head.update(&slot, &next_buf_head);

    struct packet_buf *ringbuf = packet_buf.lookup(&buf_head);
    if (ringbuf == NULL)
        return 0;
    u32 skb_data_len = skb->data_len;
    u32 headlen = data.len - skb_data_len;
    headlen &= 0xffffff; // Useless, but the verifier demands it because "this unsigned(!) variable could otherwise be negative"
    bpf_probe_read_kernel(ringbuf->data, headlen < PACKET_BUF_SIZE ? headlen : PACKET_BUF_SIZE, skb->data);

    data.packet_buf_ptr = buf_head;
    data.len = headlen;
    data.datalen = skb_data_len;
    data.head = (u64) skb->head;
    data.data = (u64) skb->data;
    data.tail = (u64) skb->tail;
    data.end = (u64) skb->end;

    xmits.perf_submit(ctx, &data, sizeof(data));
    return 0;
}
"""

global b

def xmit_received(cpu, data, size):
    global b
    global py_packet_buf
    ev = b["xmits"].event(data)
    print("%-18d %-25s %-8d %-8d %-10d %-10d %-12d %-12d %-12d %-12d"
          % (ev.ts, ev.comm.decode(), ev.pid, ev.tgid, ev.len, ev.datalen,
             ev.head, ev.data, ev.tail, ev.end))
    bs = cast(py_packet_buf[ev.packet_buf_ptr][cpu].data, POINTER(c_char))[:ev.len]
    c = bytes(bs)
    print(c.hex())

def observe_kernel():
    # load BPF program
    global b
    b = BPF(text=prog)
    print("%-18s %-25s %-8s %-8s %-10s %-10s %-12s %-12s %-12s %-12s"
          % ("TS", "COMM", "PID", "TGID", "LEN", "DATALEN", "HEAD", "DATA",
             "TAIL", "END"))
    b["xmits"].open_perf_buffer(xmit_received)
    global py_packet_buf
    py_packet_buf = b["packet_buf"]
    try:
        while True:
            b.perf_buffer_poll()
    except KeyboardInterrupt:
        print("Kernel observer thread stopped.")

observe_kernel()
Found the issue.
I needed to replace
struct packet_buf {
    char data[PACKET_BUF_SIZE];
};

with

struct packet_buf {
    unsigned char data[PACKET_BUF_SIZE];
};
I, however, do not understand how signedness makes a difference when I am not performing comparisons or arithmetic operations with this data. (My best guess, and it is only a guess: the difference is on the Python side, not in the BPF program. BCC maps a char array to a ctypes c_char array, and reading such a structure field returns a NUL-truncated Python bytes copy, so the cast(...)[:ev.len] in xmit_received reads past the end of a short temporary rather than the real per-CPU table memory; an unsigned char array maps to c_ubyte and is handed back as a genuine ctypes array.)

Armadillo and OpenMP and stack-use-after-scope

I have an issue with a stack-use-after-scope error with the C++ Armadillo library within an OpenMP block in an R package, and I cannot figure out what is wrong. The complete gcc log from the CRAN GCC ASAN check of the R package is here. I have kept the relevant part of the log below:
==33791==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7ffd03364940 at pc 0x7ff8127abc07 bp 0x7ffd03364680 sp 0x7ffd03364670
WRITE of size 4 at 0x7ffd03364940 thread T0
#0 0x7ff8127abc06 in arma::Mat<double>::Mat(double*, unsigned int, unsigned int, bool, bool) /data/gannet/ripley/R/test-3.5/RcppArmadillo/include/armadillo_bits/Mat_meat.hpp:1215
#1 0x7ff8129fb0c2 in GMA<logistic>::solve() [clone ._omp_fn.0] /data/gannet/ripley/R/test-3.5/RcppArmadillo/include/armadillo_bits/Col_meat.hpp:411
#2 0x7ff825ae2cde in GOMP_parallel (/lib64/libgomp.so.1+0xdcde)
#3 0x7ff812a0c9f8 in GMA<logistic>::solve() ddhazard/GMA_solver.cpp:83
#4 0x7ff81276421d in ddhazard_fit_cpp(...
Address 0x7ffd03364940 is located in stack of thread T0 at offset 416 in frame
#0 0x7ff8129fa82f in GMA<logistic>::solve() [clone ._omp_fn.0] ddhazard/GMA_solver.cpp:83
This frame has 5 object(s):
[32, 40) 'dest'
[96, 104) 'src'
[160, 176) 'ans'
[224, 384) 'my_X_cross'
[416, 576) '<unknown>' <== Memory access at offset 416 is inside this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism or swapcontext
(longjmp and C++ exceptions *are* supported)
SUMMARY: AddressSanitizer: stack-use-after-scope /data/gannet/ripley/R/test-3.5/RcppArmadillo/include/armadillo_bits/Mat_meat.hpp:1215 in arma::Mat<double>::Mat(double*, unsigned int, unsigned int, bool, bool)
Shadow bytes around the buggy address:
0x1000206648d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x1000206648e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x1000206648f0: 00 00 00 00 f1 f1 f1 f1 00 f2 f2 f2 f2 f2 f2 f2
0x100020664900: 00 f2 f2 f2 f2 f2 f2 f2 f8 f8 f2 f2 f2 f2 f2 f2
0x100020664910: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x100020664920: 00 00 00 00 f2 f2 f2 f2[f8]f8 f8 f8 f8 f8 f8 f8
0x100020664930: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f3 f3 f3 f3
0x100020664940: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100020664950: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100020664960: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100020664970: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==33791==ABORTING
The WRITE that causes the error is in the dynamichazard/src/ddhazard/GMA_solver.cpp and particularly this OpenMP block
#ifdef _OPENMP
int n_threads = std::max(1, std::min(omp_get_max_threads(),
                                     (int)r_set.n_elem / 1000 + 1));
#pragma omp parallel num_threads(n_threads) if(n_threads > 1)
{
#endif

arma::mat my_X_cross(q, q, arma::fill::zeros);

#ifdef _OPENMP
#pragma omp for schedule(static)
#endif
for(arma::uword i = 0; i < r_set.n_elem; i++){
    auto trunc_eta = T::truncate_eta(
        is_event[i], eta[i], exp(eta[i]), at_risk_length[i]);
    h_1d[i] = w[i] * T::d_log_like(
        is_event[i], trunc_eta, at_risk_length[i]);
    double h_2d_neg = - w[i] * T::dd_log_like(
        is_event[i], trunc_eta, at_risk_length[i]);

    sym_mat_rank_one_update(h_2d_neg, X_t.unsafe_col(i), my_X_cross);
}

#ifdef _OPENMP
#pragma omp critical(gma_lock)
{
#endif

X_cross += my_X_cross;

#ifdef _OPENMP
}
}
#endif
As far as I can tell, the error is at the X_t.unsafe_col(i) call in the call to sym_mat_rank_one_update. The declaration of the function is
void sym_mat_rank_one_update(const double, const arma::vec&, arma::mat&);
It should trigger a call to the arma::Col<double> constructor at line 411 of include/armadillo_bits/Col_meat.hpp, which delegates to the arma::Mat<double> constructor at line 1215 of include/armadillo_bits/Mat_meat.hpp. I gather this is where the 4-byte write to one of the unsigned ints occurs, since the arma::Mat<double> constructor is
template<typename eT>
inline
Mat<eT>::Mat(eT* aux_mem, const uword aux_n_rows, const uword aux_n_cols, const bool copy_aux_mem, const bool strict)
  : n_rows    ( aux_n_rows )
  , n_cols    ( aux_n_cols )
  , n_elem    ( aux_n_rows*aux_n_cols )
  , vec_state ( 0 )
  , mem_state ( copy_aux_mem ? 0 : ( strict ? 2 : 1 ) )
  , mem       ( copy_aux_mem ? 0 : aux_mem )
{
    arma_extra_debug_sigprint_this(this);

    if(copy_aux_mem == true)
    {
        init_cold();
        arrayops::copy( memptr(), aux_mem, n_elem );
    }
}
where
template<typename eT>
class Mat : public Base< eT, Mat<eT> >
{
public:

    typedef eT                                elem_type; //!< the type of elements stored in the matrix
    typedef typename get_pod_type<eT>::result pod_type;  //!< if eT is std::complex<T>, pod_type is T; otherwise pod_type is eT

    const uword  n_rows;    //!< number of rows (read-only)
    const uword  n_cols;    //!< number of columns (read-only)
    const uword  n_elem;    //!< number of elements (read-only)
    const uhword vec_state; //!< 0: matrix layout; 1: column vector layout; 2: row vector layout
    const uhword mem_state;
    ...
See include/armadillo_bits/Mat_bones.hpp and notice that arma::uword is unsigned int. However, I cannot figure out why this would cause a stack-use-after-scope.
A similar error is in the Morpho package. See the current CRAN log here and src/createL.cpp.
Setup
The above check is on CRAN. As far as I can tell, it is with gcc 7.2 on Fedora 26 with the following config.site used to build R
CXX="g++ -fsanitize=address,undefined,bounds-strict -fno-omit-frame-pointer"
CFLAGS="-g -O2 -Wall -pedantic -mtune=native -fsanitize=address"
FFLAGS="-g -O2 -mtune=native"
FCFLAGS="-g -O2 -mtune=native"
CXXFLAGS="-g -O2 -Wall -pedantic -mtune=native"
MAIN_LDFLAGS=-fsanitize=address,undefined
Further, the following ~/.R/Makevars is used
CC = gcc -std=gnu99 -fsanitize=address,undefined -fno-omit-frame-pointer
F77 = gfortran -fsanitize=address
FC = gfortran -fsanitize=address
FCFLAGS = -g -O2 -mtune=native -fbounds-check
FFLAGS = -g -O2 -mtune=native -fbounds-check
The error does not happen with clang 5.0.0 or valgrind on the same machine. Further, I cannot reproduce it on a local Ubuntu 17.04 with gcc version 6.3 or clang version 4.0.0.
Minimal, Complete, and Verifiable example
I will work on making one.

Method to calculate 'mechListMIC' for SPNEGO GSS-API(NTLMSSP_AUTH) accept-completed(0) state

I am trying to learn about and implement an SMB2 server, and I am particularly interested in how GSS-API (NTLMSSP, NTLMSSP_AUTH) works inside, so I am experimenting with my own GSS-API component. I have read the description of mechListMIC in RFC 4178 and RFC 2478, but I cannot work out how to calculate the mechListMIC for the 'SessionSetup Response, Unknown message type' response.
I can generate the mechListMIC for the negTokenInit phase of the 'NegotiateProtocol Response'. The problem is that when the client sends a 'SessionSetup Request, NTLMSSP_AUTH, User: Domain\Administrator, Unknown message type' request, I cannot work out how it generates 'mechListMIC: 01 00 00 00 78 1E E9 4A DB 99 7F E9 00 00 00 00', nor how I should send the response back in a 'SessionSetup Response, Unknown message type' with the corresponding mechListMIC based on the previous SessionSetup Request.
I tried with the following info:
SMB2.CSessionSetup.securityBlob.GSSAPI.InitialContextToken.InnerContextToken.SpnegoToken.NegTokenInit.MechTypes, hex data = 30 0C 06 0A 2B 06 01 04 01 82 37 02 02 0A
and
SMB2.CSessionSetup.securityBlob.GSSAPI.NegotiationToken.NegTokenResp.MechListMic, hex data = 01 00 00 00 78 1E E9 4A DB 99 7F E9 00 00 00 00
SecBuffer SignBuffers[2];
SecBufferDesc SignBufferDesc;

SignBufferDesc.ulVersion = SECBUFFER_VERSION; // SECBUFFER_VERSION = 0
SignBufferDesc.cBuffers = 2;
SignBufferDesc.pBuffers = SignBuffers;

SignBuffers[0].BufferType = SECBUFFER_DATA;  // SECBUFFER_DATA = 1
// SignBuffers[0] holds the mechTypes: 30 0C 06 0A 2B 06 01 04 01 82 37 02 02 0A
SignBuffers[1].BufferType = SECBUFFER_TOKEN; // SECBUFFER_TOKEN = 2
// SignBuffers[1] holds the MIC: 01 00 00 00 78 1E E9 4A DB 99 7F E9 00 00 00 00
Can anyone please tell me what information I need to feed into the HMAC-MD5(key, data) algorithm to generate the mechListMIC for the SessionSetup Response, and how?
If it is possible to create a step-by-step example from my test case for calculating the mechListMIC of the 'SessionSetup Response, Unknown message type' response, that would be very helpful. Please let me know if you need any further information.
Thanks,
Shishir
Please find the answer on the MSDN site:
http://social.msdn.microsoft.com/Forums/gu-IN/os_fileservices/thread/d00b4e1a-077b-4620-99c7-da7bf86d5212
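For context, and strictly as my own hedged reading of MS-NLMP (section 3.4.4.2) rather than anything stated in the linked thread: a mechListMIC of the shape 01 00 00 00 <8 bytes> <4 bytes> looks like an NTLMSSP message signature, i.e. Version (1) || first 8 bytes of HMAC-MD5(SigningKey, SeqNum || Message) || SeqNum, computed over the DER-encoded mechTypes list, with the signing key derived from the session key and a magic constant. A sketch in Go; the session key below is a placeholder (the real one comes out of the NTLM authentication), and with NTLMSSP_NEGOTIATE_KEY_EXCH the checksum is additionally RC4-sealed, which this sketch omits:

package main

import (
    "crypto/hmac"
    "crypto/md5"
    "encoding/binary"
    "fmt"
)

// signingKey derives an NTLM signing key per MS-NLMP: MD5(sessionKey || magic).
func signingKey(sessionKey []byte, magic string) []byte {
    h := md5.New()
    h.Write(sessionKey)
    h.Write(append([]byte(magic), 0)) // the magic constant is NUL-terminated
    return h.Sum(nil)
}

// mac builds Version || HMAC-MD5(key, seqNum||msg)[:8] || seqNum.
func mac(key, msg []byte, seqNum uint32) []byte {
    seq := make([]byte, 4)
    binary.LittleEndian.PutUint32(seq, seqNum)
    m := hmac.New(md5.New, key)
    m.Write(seq)
    m.Write(msg)
    sig := []byte{0x01, 0x00, 0x00, 0x00} // version 1
    sig = append(sig, m.Sum(nil)[:8]...)  // truncated HMAC checksum
    return append(sig, seq...)            // sequence number
}

func main() {
    // mechTypes DER bytes from the question; the session key is a dummy.
    mechTypes := []byte{
        0x30, 0x0C, 0x06, 0x0A, 0x2B, 0x06, 0x01, 0x04,
        0x01, 0x82, 0x37, 0x02, 0x02, 0x0A,
    }
    key := signingKey(make([]byte, 16),
        "session key to server-to-client signing key magic constant")
    fmt.Printf("% X\n", mac(key, mechTypes, 0))
}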

What's going on with this byte array?

I have a byte array:
00 01 00 00 00 12 81 00 00 01 00 C8 00 00 00 00 00 08 5C 9F 4F A5 09 45 D4 CE
It is read via a StreamReader using UTF-8 encoding:
// Note: I can't change this code, too many components depend on it.
using (StreamReader streamReader =
    new StreamReader(responseStream, Encoding.UTF8, false))
{
    string streamData = streamReader.ReadToEnd();
    if (requestData.Callback != null)
    {
        requestData.Callback(response, streamData);
    }
}
When that code runs, I get the following returned to me (I converted the string back to a byte array):
00 01 00 00 00 12 EF BF BD 00 00 01 00 EF BF BD 00 00 00 00 00 08 5C EF BF BD 4F EF BF BD 09 45 EF BF BD
Somehow I need to take what's returned to me and get back the original encoding and the original byte array, but nothing I've tried has worked.
Please be aware that I'm working with the limited WP7 API.
Hopefully you guys can help.
Thanks!
Update:
If I run the following code, it's almost right; the only thing wrong is that the fifth-to-last byte gets split out.
byte[] writeBuf1 = System.Text.Encoding.UTF8.GetBytes(data);
string buf1string = System.Text.Encoding.BigEndianUnicode.GetString(writeBuf1, 0, writeBuf1.Length);
byte[] writeBuf = System.Text.Encoding.BigEndianUnicode.GetBytes(buf1string);
The original byte array is not valid UTF-8. The StreamReader therefore replaces each invalid byte with the replacement character U+FFFD. When that character gets encoded back to UTF-8, it produces the byte sequence EF BF BD. You cannot reconstruct the original bytes from the string, because the information is completely lost.
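A small sketch reproducing the effect, in Go rather than C# just to make the mechanics concrete (.NET's decoder may consume more than one byte per malformed sequence, but the principle is identical): every byte that is not valid UTF-8 decodes to U+FFFD, which re-encodes as EF BF BD:

package main

import (
    "fmt"
    "unicode/utf8"
)

func main() {
    raw := []byte{0x00, 0x01, 0x00, 0x12, 0x81, 0xC8} // not valid UTF-8
    var out []byte
    for i := 0; i < len(raw); {
        r, size := utf8.DecodeRune(raw[i:]) // invalid input yields (RuneError, 1)
        out = append(out, []byte(string(r))...)
        i += size
    }
    fmt.Printf("% x\n", out) // 00 01 00 12 ef bf bd ef bf bd
}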
